Test Report: KVM_Linux_crio 19529

                    
d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Failed tests (29/318)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 73.92
34 TestAddons/parallel/Ingress 156.97
36 TestAddons/parallel/MetricsServer 361.56
164 TestMultiControlPlane/serial/StopSecondaryNode 141.88
166 TestMultiControlPlane/serial/RestartSecondaryNode 48.92
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 406.13
171 TestMultiControlPlane/serial/StopCluster 141.64
231 TestMultiNode/serial/RestartKeepsNodes 324.92
233 TestMultiNode/serial/StopMultiNode 141.14
240 TestPreload 222.35
248 TestKubernetesUpgrade 401.51
320 TestStartStop/group/old-k8s-version/serial/FirstStart 303.16
345 TestStartStop/group/no-preload/serial/Stop 139
349 TestStartStop/group/embed-certs/serial/Stop 139.05
351 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
352 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
354 TestStartStop/group/old-k8s-version/serial/DeployApp 0.46
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.11
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
362 TestStartStop/group/old-k8s-version/serial/SecondStart 701.33
363 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.2
364 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.21
365 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.15
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.5
367 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 484.02
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 436.23
369 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 362.62
370 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 134.4
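
To reproduce a failure locally, each entry above can be re-run on its own through the standard Go test runner. A minimal sketch, assuming a checked-out minikube source tree and a locally built out/minikube-linux-amd64 binary; the CI harness may add build tags, --minikube-start-args values and other flags that are not visible in this report:

    # Build minikube, then re-run only the failing Registry test.
    # Only -run/-timeout/-v are standard Go test flags; the rest of the invocation is an assumption.
    make
    go test ./test/integration -run "TestAddons/parallel/Registry" -timeout 90m -v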
TestAddons/parallel/Registry (73.92s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.002854ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004311968s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004605977s
addons_test.go:342: (dbg) Run:  kubectl --context addons-990097 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-990097 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-990097 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.082340105s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr **
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-990097 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 ip
2024/08/28 17:03:37 [DEBUG] GET http://192.168.39.195:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable registry --alsologtostderr -v=1
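
The exit status 1 above comes from the one-minute in-pod wget probe timing out, even though the registry and registry-proxy pods reported Running earlier in the test. A minimal manual follow-up sketch, assuming the addon's default object names in kube-system (Service "registry", DaemonSet "registry-proxy") and reusing this run's context name:

    # Confirm the Service exists and has ready endpoints behind it (object names assumed from addon defaults).
    kubectl --context addons-990097 -n kube-system get svc registry -o wide
    kubectl --context addons-990097 -n kube-system get endpoints registry
    # Check that cluster DNS resolves the name the test probes.
    kubectl --context addons-990097 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local
    # Repeat the test's probe by hand.
    kubectl --context addons-990097 run registry-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"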
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-990097 -n addons-990097
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 logs -n 25: (1.277133468s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-238617                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-238617                                                                     | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-382773                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-382773                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-238617                                                                     | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-382773                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | binary-mirror-802579                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34799                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-802579                                                                     | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| addons  | disable dashboard -p                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-990097 --wait=true                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh curl -s                                                                   | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh cat                                                                       | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | /opt/local-path-provisioner/pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-990097 ip                                                                            | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:52:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:52:03.553302   18249 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:52:03.553558   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.553567   18249 out.go:358] Setting ErrFile to fd 2...
	I0828 16:52:03.553572   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.554137   18249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 16:52:03.555206   18249 out.go:352] Setting JSON to false
	I0828 16:52:03.556015   18249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2070,"bootTime":1724861854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:52:03.556070   18249 start.go:139] virtualization: kvm guest
	I0828 16:52:03.557879   18249 out.go:177] * [addons-990097] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:52:03.559933   18249 notify.go:220] Checking for updates...
	I0828 16:52:03.559948   18249 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 16:52:03.561141   18249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:52:03.562248   18249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:03.563381   18249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:03.564522   18249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 16:52:03.565685   18249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 16:52:03.567058   18249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:52:03.598505   18249 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 16:52:03.599805   18249 start.go:297] selected driver: kvm2
	I0828 16:52:03.599821   18249 start.go:901] validating driver "kvm2" against <nil>
	I0828 16:52:03.599832   18249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 16:52:03.600482   18249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.600546   18249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 16:52:03.615718   18249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 16:52:03.615767   18249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:52:03.616004   18249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:52:03.616072   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:03.616089   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:03.616099   18249 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:52:03.616172   18249 start.go:340] cluster config:
	{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:03.616295   18249 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.618096   18249 out.go:177] * Starting "addons-990097" primary control-plane node in "addons-990097" cluster
	I0828 16:52:03.619317   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:03.619368   18249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 16:52:03.619389   18249 cache.go:56] Caching tarball of preloaded images
	I0828 16:52:03.619481   18249 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 16:52:03.619495   18249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 16:52:03.619843   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:03.619867   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json: {Name:mk1d9cf08f8bf0b3aa1979f7c4b7b4ba59401421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:03.620021   18249 start.go:360] acquireMachinesLock for addons-990097: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 16:52:03.620070   18249 start.go:364] duration metric: took 34.81µs to acquireMachinesLock for "addons-990097"
	I0828 16:52:03.620088   18249 start.go:93] Provisioning new machine with config: &{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:03.620159   18249 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 16:52:03.622720   18249 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0828 16:52:03.622873   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:03.622908   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:03.637096   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0828 16:52:03.637576   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:03.638135   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:03.638159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:03.638519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:03.638728   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:03.638904   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:03.639054   18249 start.go:159] libmachine.API.Create for "addons-990097" (driver="kvm2")
	I0828 16:52:03.639083   18249 client.go:168] LocalClient.Create starting
	I0828 16:52:03.639131   18249 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 16:52:03.706793   18249 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 16:52:04.040558   18249 main.go:141] libmachine: Running pre-create checks...
	I0828 16:52:04.040580   18249 main.go:141] libmachine: (addons-990097) Calling .PreCreateCheck
	I0828 16:52:04.041083   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:04.041464   18249 main.go:141] libmachine: Creating machine...
	I0828 16:52:04.041477   18249 main.go:141] libmachine: (addons-990097) Calling .Create
	I0828 16:52:04.041686   18249 main.go:141] libmachine: (addons-990097) Creating KVM machine...
	I0828 16:52:04.042940   18249 main.go:141] libmachine: (addons-990097) DBG | found existing default KVM network
	I0828 16:52:04.043689   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.043534   18271 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0828 16:52:04.043707   18249 main.go:141] libmachine: (addons-990097) DBG | created network xml: 
	I0828 16:52:04.043719   18249 main.go:141] libmachine: (addons-990097) DBG | <network>
	I0828 16:52:04.043734   18249 main.go:141] libmachine: (addons-990097) DBG |   <name>mk-addons-990097</name>
	I0828 16:52:04.043744   18249 main.go:141] libmachine: (addons-990097) DBG |   <dns enable='no'/>
	I0828 16:52:04.043754   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043761   18249 main.go:141] libmachine: (addons-990097) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0828 16:52:04.043768   18249 main.go:141] libmachine: (addons-990097) DBG |     <dhcp>
	I0828 16:52:04.043774   18249 main.go:141] libmachine: (addons-990097) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0828 16:52:04.043781   18249 main.go:141] libmachine: (addons-990097) DBG |     </dhcp>
	I0828 16:52:04.043787   18249 main.go:141] libmachine: (addons-990097) DBG |   </ip>
	I0828 16:52:04.043797   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043808   18249 main.go:141] libmachine: (addons-990097) DBG | </network>
	I0828 16:52:04.043821   18249 main.go:141] libmachine: (addons-990097) DBG | 
	I0828 16:52:04.048764   18249 main.go:141] libmachine: (addons-990097) DBG | trying to create private KVM network mk-addons-990097 192.168.39.0/24...
	I0828 16:52:04.113488   18249 main.go:141] libmachine: (addons-990097) DBG | private KVM network mk-addons-990097 192.168.39.0/24 created
	I0828 16:52:04.113513   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.113440   18271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.113526   18249 main.go:141] libmachine: (addons-990097) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.113543   18249 main.go:141] libmachine: (addons-990097) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 16:52:04.113618   18249 main.go:141] libmachine: (addons-990097) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 16:52:04.371432   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.371337   18271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa...
	I0828 16:52:04.533443   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533306   18271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk...
	I0828 16:52:04.533482   18249 main.go:141] libmachine: (addons-990097) DBG | Writing magic tar header
	I0828 16:52:04.533524   18249 main.go:141] libmachine: (addons-990097) DBG | Writing SSH key tar header
	I0828 16:52:04.533569   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533458   18271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.533617   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097
	I0828 16:52:04.533642   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 (perms=drwx------)
	I0828 16:52:04.533657   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 16:52:04.533672   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.533690   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 16:52:04.533705   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 16:52:04.533713   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins
	I0828 16:52:04.533724   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 16:52:04.533737   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home
	I0828 16:52:04.533748   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 16:52:04.533762   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 16:52:04.533774   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 16:52:04.533786   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 16:52:04.533798   18249 main.go:141] libmachine: (addons-990097) DBG | Skipping /home - not owner
	I0828 16:52:04.533808   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:04.535453   18249 main.go:141] libmachine: (addons-990097) define libvirt domain using xml: 
	I0828 16:52:04.535472   18249 main.go:141] libmachine: (addons-990097) <domain type='kvm'>
	I0828 16:52:04.535482   18249 main.go:141] libmachine: (addons-990097)   <name>addons-990097</name>
	I0828 16:52:04.535497   18249 main.go:141] libmachine: (addons-990097)   <memory unit='MiB'>4000</memory>
	I0828 16:52:04.535505   18249 main.go:141] libmachine: (addons-990097)   <vcpu>2</vcpu>
	I0828 16:52:04.535513   18249 main.go:141] libmachine: (addons-990097)   <features>
	I0828 16:52:04.535525   18249 main.go:141] libmachine: (addons-990097)     <acpi/>
	I0828 16:52:04.535533   18249 main.go:141] libmachine: (addons-990097)     <apic/>
	I0828 16:52:04.535543   18249 main.go:141] libmachine: (addons-990097)     <pae/>
	I0828 16:52:04.535552   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.535560   18249 main.go:141] libmachine: (addons-990097)   </features>
	I0828 16:52:04.535573   18249 main.go:141] libmachine: (addons-990097)   <cpu mode='host-passthrough'>
	I0828 16:52:04.535578   18249 main.go:141] libmachine: (addons-990097)   
	I0828 16:52:04.535587   18249 main.go:141] libmachine: (addons-990097)   </cpu>
	I0828 16:52:04.535595   18249 main.go:141] libmachine: (addons-990097)   <os>
	I0828 16:52:04.535599   18249 main.go:141] libmachine: (addons-990097)     <type>hvm</type>
	I0828 16:52:04.535605   18249 main.go:141] libmachine: (addons-990097)     <boot dev='cdrom'/>
	I0828 16:52:04.535610   18249 main.go:141] libmachine: (addons-990097)     <boot dev='hd'/>
	I0828 16:52:04.535620   18249 main.go:141] libmachine: (addons-990097)     <bootmenu enable='no'/>
	I0828 16:52:04.535627   18249 main.go:141] libmachine: (addons-990097)   </os>
	I0828 16:52:04.535632   18249 main.go:141] libmachine: (addons-990097)   <devices>
	I0828 16:52:04.535640   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='cdrom'>
	I0828 16:52:04.535673   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/boot2docker.iso'/>
	I0828 16:52:04.535696   18249 main.go:141] libmachine: (addons-990097)       <target dev='hdc' bus='scsi'/>
	I0828 16:52:04.535707   18249 main.go:141] libmachine: (addons-990097)       <readonly/>
	I0828 16:52:04.535719   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535743   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='disk'>
	I0828 16:52:04.535766   18249 main.go:141] libmachine: (addons-990097)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 16:52:04.535787   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk'/>
	I0828 16:52:04.535800   18249 main.go:141] libmachine: (addons-990097)       <target dev='hda' bus='virtio'/>
	I0828 16:52:04.535809   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535822   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535834   18249 main.go:141] libmachine: (addons-990097)       <source network='mk-addons-990097'/>
	I0828 16:52:04.535847   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535857   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535873   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535886   18249 main.go:141] libmachine: (addons-990097)       <source network='default'/>
	I0828 16:52:04.535900   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535911   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535920   18249 main.go:141] libmachine: (addons-990097)     <serial type='pty'>
	I0828 16:52:04.535932   18249 main.go:141] libmachine: (addons-990097)       <target port='0'/>
	I0828 16:52:04.535942   18249 main.go:141] libmachine: (addons-990097)     </serial>
	I0828 16:52:04.535953   18249 main.go:141] libmachine: (addons-990097)     <console type='pty'>
	I0828 16:52:04.535965   18249 main.go:141] libmachine: (addons-990097)       <target type='serial' port='0'/>
	I0828 16:52:04.535984   18249 main.go:141] libmachine: (addons-990097)     </console>
	I0828 16:52:04.536000   18249 main.go:141] libmachine: (addons-990097)     <rng model='virtio'>
	I0828 16:52:04.536015   18249 main.go:141] libmachine: (addons-990097)       <backend model='random'>/dev/random</backend>
	I0828 16:52:04.536025   18249 main.go:141] libmachine: (addons-990097)     </rng>
	I0828 16:52:04.536033   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536041   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536047   18249 main.go:141] libmachine: (addons-990097)   </devices>
	I0828 16:52:04.536052   18249 main.go:141] libmachine: (addons-990097) </domain>
	I0828 16:52:04.536066   18249 main.go:141] libmachine: (addons-990097) 
	I0828 16:52:04.542000   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:8a:92:29 in network default
	I0828 16:52:04.542553   18249 main.go:141] libmachine: (addons-990097) Ensuring networks are active...
	I0828 16:52:04.542572   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:04.543276   18249 main.go:141] libmachine: (addons-990097) Ensuring network default is active
	I0828 16:52:04.543557   18249 main.go:141] libmachine: (addons-990097) Ensuring network mk-addons-990097 is active
	I0828 16:52:04.544054   18249 main.go:141] libmachine: (addons-990097) Getting domain xml...
	I0828 16:52:04.544739   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:05.926909   18249 main.go:141] libmachine: (addons-990097) Waiting to get IP...
	I0828 16:52:05.927895   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:05.928293   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:05.928329   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:05.928275   18271 retry.go:31] will retry after 307.43588ms: waiting for machine to come up
	I0828 16:52:06.237778   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.238168   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.238197   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.238118   18271 retry.go:31] will retry after 239.740862ms: waiting for machine to come up
	I0828 16:52:06.479526   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.479888   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.479911   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.479872   18271 retry.go:31] will retry after 313.269043ms: waiting for machine to come up
	I0828 16:52:06.794296   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.794785   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.794809   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.794738   18271 retry.go:31] will retry after 569.173838ms: waiting for machine to come up
	I0828 16:52:07.365385   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.365805   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.365854   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.365801   18271 retry.go:31] will retry after 528.42487ms: waiting for machine to come up
	I0828 16:52:07.896190   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.896616   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.896641   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.896567   18271 retry.go:31] will retry after 860.364887ms: waiting for machine to come up
	I0828 16:52:08.758007   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:08.758436   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:08.758461   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:08.758398   18271 retry.go:31] will retry after 735.816889ms: waiting for machine to come up
	I0828 16:52:09.496298   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:09.496737   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:09.496767   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:09.496707   18271 retry.go:31] will retry after 1.098370398s: waiting for machine to come up
	I0828 16:52:10.596985   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:10.597408   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:10.597437   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:10.597359   18271 retry.go:31] will retry after 1.834335212s: waiting for machine to come up
	I0828 16:52:12.434290   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:12.434611   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:12.434633   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:12.434571   18271 retry.go:31] will retry after 2.041065784s: waiting for machine to come up
	I0828 16:52:14.477426   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:14.477916   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:14.477948   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:14.477861   18271 retry.go:31] will retry after 1.984370117s: waiting for machine to come up
	I0828 16:52:16.464891   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:16.465274   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:16.465295   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:16.465230   18271 retry.go:31] will retry after 3.029154804s: waiting for machine to come up
	I0828 16:52:19.496261   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:19.496603   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:19.496625   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:19.496589   18271 retry.go:31] will retry after 3.151315591s: waiting for machine to come up
	I0828 16:52:22.651764   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:22.652112   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:22.652134   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:22.652073   18271 retry.go:31] will retry after 4.012346275s: waiting for machine to come up
	I0828 16:52:26.667962   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668404   18249 main.go:141] libmachine: (addons-990097) Found IP for machine: 192.168.39.195
	I0828 16:52:26.668422   18249 main.go:141] libmachine: (addons-990097) Reserving static IP address...
	I0828 16:52:26.668433   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has current primary IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668824   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "addons-990097", mac: "52:54:00:36:9e:33", ip: "192.168.39.195"} in network mk-addons-990097
	I0828 16:52:26.740976   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:26.741009   18249 main.go:141] libmachine: (addons-990097) Reserved static IP address: 192.168.39.195
	I0828 16:52:26.741023   18249 main.go:141] libmachine: (addons-990097) Waiting for SSH to be available...
	I0828 16:52:26.743441   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.743738   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097
	I0828 16:52:26.743775   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find defined IP address of network mk-addons-990097 interface with MAC address 52:54:00:36:9e:33
	I0828 16:52:26.743951   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:26.743968   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:26.743999   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:26.744010   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:26.744026   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:26.754106   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: exit status 255: 
	I0828 16:52:26.754130   18249 main.go:141] libmachine: (addons-990097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 16:52:26.754137   18249 main.go:141] libmachine: (addons-990097) DBG | command : exit 0
	I0828 16:52:26.754143   18249 main.go:141] libmachine: (addons-990097) DBG | err     : exit status 255
	I0828 16:52:26.754151   18249 main.go:141] libmachine: (addons-990097) DBG | output  : 
	I0828 16:52:29.754760   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:29.757068   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757372   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.757400   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757503   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:29.757540   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:29.757562   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:29.757572   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:29.757582   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:29.877937   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: <nil>: 
	I0828 16:52:29.878225   18249 main.go:141] libmachine: (addons-990097) KVM machine creation complete!
	I0828 16:52:29.878543   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:29.879088   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879423   18249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 16:52:29.879439   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:29.880692   18249 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 16:52:29.880710   18249 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 16:52:29.880719   18249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 16:52:29.880732   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.882838   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883224   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.883254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883344   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.883507   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883658   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883823   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.884002   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.884174   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.884185   18249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 16:52:29.985509   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 16:52:29.985528   18249 main.go:141] libmachine: Detecting the provisioner...
	I0828 16:52:29.985535   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.988176   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.988544   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988718   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.988926   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989088   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989208   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.989336   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.989559   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.989571   18249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 16:52:30.090732   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 16:52:30.090818   18249 main.go:141] libmachine: found compatible host: buildroot
	I0828 16:52:30.090830   18249 main.go:141] libmachine: Provisioning with buildroot...
	I0828 16:52:30.090838   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091074   18249 buildroot.go:166] provisioning hostname "addons-990097"
	I0828 16:52:30.091095   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091265   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.094119   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094571   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.094674   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094784   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.094970   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095160   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095304   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.095507   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.095700   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.095717   18249 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-990097 && echo "addons-990097" | sudo tee /etc/hostname
	I0828 16:52:30.212118   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-990097
	
	I0828 16:52:30.212145   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.214848   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215331   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.215363   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215707   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.215913   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216244   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.216447   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.216630   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.216653   18249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-990097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-990097/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-990097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 16:52:30.326941   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 16:52:30.326969   18249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 16:52:30.326993   18249 buildroot.go:174] setting up certificates
	I0828 16:52:30.327005   18249 provision.go:84] configureAuth start
	I0828 16:52:30.327014   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.327328   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.330236   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330668   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.330698   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330848   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.332951   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333214   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.333255   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333377   18249 provision.go:143] copyHostCerts
	I0828 16:52:30.333453   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 16:52:30.333574   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 16:52:30.333649   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 16:52:30.333709   18249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.addons-990097 san=[127.0.0.1 192.168.39.195 addons-990097 localhost minikube]
	I0828 16:52:30.457282   18249 provision.go:177] copyRemoteCerts
	I0828 16:52:30.457342   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 16:52:30.457365   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.460211   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460550   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.460584   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460756   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.460951   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.461115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.461336   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.544126   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 16:52:30.567154   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 16:52:30.592366   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 16:52:30.617237   18249 provision.go:87] duration metric: took 290.219862ms to configureAuth
	I0828 16:52:30.617267   18249 buildroot.go:189] setting minikube options for container-runtime
	I0828 16:52:30.617448   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:30.617548   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.619914   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620221   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.620254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620425   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.620640   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620783   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620914   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.621107   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.621256   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.621270   18249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 16:52:30.848003   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 16:52:30.848031   18249 main.go:141] libmachine: Checking connection to Docker...
	I0828 16:52:30.848042   18249 main.go:141] libmachine: (addons-990097) Calling .GetURL
	I0828 16:52:30.849229   18249 main.go:141] libmachine: (addons-990097) DBG | Using libvirt version 6000000
	I0828 16:52:30.851198   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.851525   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851678   18249 main.go:141] libmachine: Docker is up and running!
	I0828 16:52:30.851690   18249 main.go:141] libmachine: Reticulating splines...
	I0828 16:52:30.851696   18249 client.go:171] duration metric: took 27.21260345s to LocalClient.Create
	I0828 16:52:30.851716   18249 start.go:167] duration metric: took 27.212664809s to libmachine.API.Create "addons-990097"
	I0828 16:52:30.851725   18249 start.go:293] postStartSetup for "addons-990097" (driver="kvm2")
	I0828 16:52:30.851734   18249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 16:52:30.851750   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:30.851973   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 16:52:30.851995   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.853964   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854285   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.854301   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854478   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.854647   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.854805   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.854935   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.935753   18249 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 16:52:30.939610   18249 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 16:52:30.939637   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 16:52:30.939732   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 16:52:30.939770   18249 start.go:296] duration metric: took 88.03849ms for postStartSetup
	I0828 16:52:30.939814   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:30.940381   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.942790   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943103   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.943132   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943312   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:30.943514   18249 start.go:128] duration metric: took 27.323344868s to createHost
	I0828 16:52:30.943546   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.945603   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.945953   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.945978   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.946156   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.946323   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946607   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946786   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.946957   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.947128   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.947143   18249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 16:52:31.050660   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724863951.031106642
	
	I0828 16:52:31.050686   18249 fix.go:216] guest clock: 1724863951.031106642
	I0828 16:52:31.050696   18249 fix.go:229] Guest: 2024-08-28 16:52:31.031106642 +0000 UTC Remote: 2024-08-28 16:52:30.943527716 +0000 UTC m=+27.423947828 (delta=87.578926ms)
	I0828 16:52:31.050749   18249 fix.go:200] guest clock delta is within tolerance: 87.578926ms
	I0828 16:52:31.050759   18249 start.go:83] releasing machines lock for "addons-990097", held for 27.430678011s
	I0828 16:52:31.050790   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.051040   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:31.053422   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053797   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.053831   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053954   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054408   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054525   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054615   18249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 16:52:31.054667   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.054710   18249 ssh_runner.go:195] Run: cat /version.json
	I0828 16:52:31.054729   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.057139   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057472   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057561   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057604   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057752   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.057882   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057908   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057911   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058061   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.058069   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058230   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058334   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058301   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.058460   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.130356   18249 ssh_runner.go:195] Run: systemctl --version
	I0828 16:52:31.176423   18249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 16:52:31.331223   18249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 16:52:31.337047   18249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 16:52:31.337126   18249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 16:52:31.352067   18249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 16:52:31.352090   18249 start.go:495] detecting cgroup driver to use...
	I0828 16:52:31.352154   18249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 16:52:31.366292   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 16:52:31.378875   18249 docker.go:217] disabling cri-docker service (if available) ...
	I0828 16:52:31.378945   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 16:52:31.391391   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 16:52:31.403829   18249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 16:52:31.515593   18249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 16:52:31.658525   18249 docker.go:233] disabling docker service ...
	I0828 16:52:31.658598   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 16:52:31.672788   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 16:52:31.684923   18249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 16:52:31.832671   18249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 16:52:31.955950   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 16:52:31.968509   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:52:31.985170   18249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 16:52:31.985222   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:31.994290   18249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 16:52:31.994356   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.003644   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.012976   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.022206   18249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 16:52:32.031981   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.041468   18249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.056996   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.066128   18249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 16:52:32.074610   18249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 16:52:32.074673   18249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 16:52:32.086779   18249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 16:52:32.095844   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:32.217079   18249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 16:52:32.305084   18249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 16:52:32.305166   18249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 16:52:32.309450   18249 start.go:563] Will wait 60s for crictl version
	I0828 16:52:32.309525   18249 ssh_runner.go:195] Run: which crictl
	I0828 16:52:32.312948   18249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 16:52:32.349653   18249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 16:52:32.349768   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.374953   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.403065   18249 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 16:52:32.404404   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:32.406839   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407142   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:32.407172   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407345   18249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 16:52:32.411258   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:32.422553   18249 kubeadm.go:883] updating cluster {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 16:52:32.422662   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:32.422725   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:32.452295   18249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 16:52:32.452389   18249 ssh_runner.go:195] Run: which lz4
	I0828 16:52:32.455957   18249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 16:52:32.459683   18249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 16:52:32.459715   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 16:52:33.619457   18249 crio.go:462] duration metric: took 1.163529047s to copy over tarball
	I0828 16:52:33.619537   18249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 16:52:35.728451   18249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108883425s)
	I0828 16:52:35.728489   18249 crio.go:469] duration metric: took 2.108993771s to extract the tarball
	I0828 16:52:35.728498   18249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 16:52:35.764177   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:35.805986   18249 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 16:52:35.806013   18249 cache_images.go:84] Images are preloaded, skipping loading
	I0828 16:52:35.806024   18249 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.0 crio true true} ...
	I0828 16:52:35.806169   18249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 16:52:35.806256   18249 ssh_runner.go:195] Run: crio config
	I0828 16:52:35.847424   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:35.847444   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:35.847453   18249 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 16:52:35.847477   18249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-990097 NodeName:addons-990097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 16:52:35.847617   18249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-990097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 16:52:35.847688   18249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 16:52:35.857307   18249 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 16:52:35.857386   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 16:52:35.866414   18249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 16:52:35.882622   18249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 16:52:35.898146   18249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0828 16:52:35.913810   18249 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0828 16:52:35.917387   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:35.928840   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:36.068112   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:36.084575   18249 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097 for IP: 192.168.39.195
	I0828 16:52:36.084599   18249 certs.go:194] generating shared ca certs ...
	I0828 16:52:36.084619   18249 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.084764   18249 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 16:52:36.178723   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt ...
	I0828 16:52:36.178750   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt: {Name:mkca0e9fa435263e5e1973904de7411404a3b5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178894   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key ...
	I0828 16:52:36.178904   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key: {Name:mke8d9e9bf1fb5b7a824f6128a8a0000adba5a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178971   18249 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 16:52:36.394826   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt ...
	I0828 16:52:36.394851   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt: {Name:mk69004c7e13f3376a06f0abafef4bde08b0d3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395002   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key ...
	I0828 16:52:36.395013   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key: {Name:mk5411c4aa0dbd29b19b8133f87fa65318c7ad4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395070   18249 certs.go:256] generating profile certs ...
	I0828 16:52:36.395115   18249 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key
	I0828 16:52:36.395137   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt with IP's: []
	I0828 16:52:36.439668   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt ...
	I0828 16:52:36.439694   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: {Name:mk453035261c38191e0ffde93aa6fa8d406cfb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439845   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key ...
	I0828 16:52:36.439856   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key: {Name:mkb125df58df3f8011bf26153ac05fdbffab3c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439917   18249 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd
	I0828 16:52:36.439934   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0828 16:52:36.539648   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd ...
	I0828 16:52:36.539677   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd: {Name:mk71f54c0b4de61e9c2536a122a940b588dc9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539818   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd ...
	I0828 16:52:36.539830   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd: {Name:mk45632fbbb3bbcb64891cfc4bf3dbd6f6b7d794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539890   18249 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt
	I0828 16:52:36.539962   18249 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key
	I0828 16:52:36.540013   18249 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key
	I0828 16:52:36.540031   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt with IP's: []
	I0828 16:52:36.667048   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt ...
	I0828 16:52:36.667076   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt: {Name:mkd4b5d49bf60b646d45ef076f74b004c8164a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.667220   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key ...
	I0828 16:52:36.667230   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key: {Name:mk17bb6cc5d80faf4d912b3341e01d7aaac69711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.667389   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 16:52:36.667426   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 16:52:36.667452   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 16:52:36.667474   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 16:52:36.668075   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 16:52:36.690924   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 16:52:36.712111   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 16:52:36.733708   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 16:52:36.764815   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 16:52:36.792036   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 16:52:36.815658   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 16:52:36.836525   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 16:52:36.857449   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 16:52:36.878273   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 16:52:36.893346   18249 ssh_runner.go:195] Run: openssl version
	I0828 16:52:36.899004   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 16:52:36.909101   18249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913722   18249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913785   18249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.919726   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 16:52:36.930086   18249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 16:52:36.933924   18249 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 16:52:36.933973   18249 kubeadm.go:392] StartCluster: {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:36.934057   18249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 16:52:36.934128   18249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 16:52:36.968169   18249 cri.go:89] found id: ""
	I0828 16:52:36.968234   18249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 16:52:36.977317   18249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 16:52:36.985866   18249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 16:52:36.994431   18249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 16:52:36.994459   18249 kubeadm.go:157] found existing configuration files:
	
	I0828 16:52:36.994509   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 16:52:37.004030   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 16:52:37.004090   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 16:52:37.012639   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 16:52:37.020830   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 16:52:37.020889   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 16:52:37.029469   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.037402   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 16:52:37.037462   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.045618   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 16:52:37.053640   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 16:52:37.053694   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
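The lines from 16:52:36.994 through 16:52:37.053 above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and deleted when the check fails (here, simply because none of the files exist yet on a first start). Condensed into a shell sketch, using the same files and endpoint as in the log:

    # Illustrative condensation of the per-file check-and-remove shown above
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
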
	I0828 16:52:37.061952   18249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 16:52:37.112124   18249 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 16:52:37.112242   18249 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 16:52:37.208201   18249 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 16:52:37.208348   18249 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 16:52:37.208461   18249 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 16:52:37.215232   18249 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 16:52:37.218733   18249 out.go:235]   - Generating certificates and keys ...
	I0828 16:52:37.218826   18249 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 16:52:37.219027   18249 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 16:52:37.494799   18249 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 16:52:37.692765   18249 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 16:52:37.856293   18249 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 16:52:38.009127   18249 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 16:52:38.187901   18249 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 16:52:38.188087   18249 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.477231   18249 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 16:52:38.477411   18249 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.539600   18249 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 16:52:39.008399   18249 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 16:52:39.328471   18249 kubeadm.go:310] [certs] Generating "sa" key and public key
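With the [certs] phase complete, the generated certificates sit under the certificateDir logged earlier (/var/lib/minikube/certs). The SANs kubeadm printed for the etcd serving certificate can be checked on disk with something like the following; this is illustrative only and assumes the standard kubeadm layout under that directory:

    # Expect the names logged above: addons-990097, localhost, 192.168.39.195, 127.0.0.1, ::1
    sudo openssl x509 -in /var/lib/minikube/certs/etcd/server.crt -noout -text \
      | grep -A1 'Subject Alternative Name'
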
	I0828 16:52:39.328600   18249 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 16:52:39.560006   18249 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 16:52:39.701891   18249 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 16:52:39.854713   18249 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 16:52:39.961910   18249 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 16:52:40.053380   18249 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 16:52:40.053922   18249 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 16:52:40.056435   18249 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 16:52:40.058106   18249 out.go:235]   - Booting up control plane ...
	I0828 16:52:40.058200   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 16:52:40.058271   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 16:52:40.058614   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 16:52:40.072832   18249 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 16:52:40.080336   18249 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 16:52:40.080381   18249 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 16:52:40.199027   18249 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 16:52:40.199152   18249 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 16:52:40.701214   18249 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.364407ms
	I0828 16:52:40.701332   18249 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 16:52:45.701403   18249 kubeadm.go:310] [api-check] The API server is healthy after 5.001374073s
	I0828 16:52:45.711899   18249 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 16:52:45.729058   18249 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 16:52:45.759777   18249 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 16:52:45.759972   18249 kubeadm.go:310] [mark-control-plane] Marking the node addons-990097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 16:52:45.773435   18249 kubeadm.go:310] [bootstrap-token] Using token: m82lde.zyra1pfrkjoxeehr
	I0828 16:52:45.775077   18249 out.go:235]   - Configuring RBAC rules ...
	I0828 16:52:45.775231   18249 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 16:52:45.781955   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 16:52:45.791540   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 16:52:45.798883   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 16:52:45.803511   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 16:52:45.808700   18249 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 16:52:46.106541   18249 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 16:52:46.534310   18249 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 16:52:47.106029   18249 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 16:52:47.107541   18249 kubeadm.go:310] 
	I0828 16:52:47.107598   18249 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 16:52:47.107633   18249 kubeadm.go:310] 
	I0828 16:52:47.107764   18249 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 16:52:47.107778   18249 kubeadm.go:310] 
	I0828 16:52:47.107809   18249 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 16:52:47.107871   18249 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 16:52:47.107961   18249 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 16:52:47.107982   18249 kubeadm.go:310] 
	I0828 16:52:47.108056   18249 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 16:52:47.108065   18249 kubeadm.go:310] 
	I0828 16:52:47.108133   18249 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 16:52:47.108140   18249 kubeadm.go:310] 
	I0828 16:52:47.108179   18249 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 16:52:47.108239   18249 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 16:52:47.108335   18249 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 16:52:47.108350   18249 kubeadm.go:310] 
	I0828 16:52:47.108499   18249 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 16:52:47.108627   18249 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 16:52:47.108638   18249 kubeadm.go:310] 
	I0828 16:52:47.108765   18249 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.108914   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 16:52:47.108948   18249 kubeadm.go:310] 	--control-plane 
	I0828 16:52:47.108962   18249 kubeadm.go:310] 
	I0828 16:52:47.109095   18249 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 16:52:47.109106   18249 kubeadm.go:310] 
	I0828 16:52:47.109197   18249 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.109291   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 16:52:47.110506   18249 kubeadm.go:310] W0828 16:52:37.095179     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.110904   18249 kubeadm.go:310] W0828 16:52:37.096135     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.111022   18249 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 16:52:47.111047   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:47.111061   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:47.113714   18249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 16:52:47.114865   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 16:52:47.125045   18249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
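The 496-byte conflist written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A bridge CNI configuration of the general shape used with the crio runtime looks like the sketch below; the subnet and plugin fields here are assumptions for illustration, not the file's actual contents:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
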
	I0828 16:52:47.141868   18249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 16:52:47.141994   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.142013   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-990097 minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-990097 minikube.k8s.io/primary=true
	I0828 16:52:47.167583   18249 ops.go:34] apiserver oom_adj: -16
	I0828 16:52:47.253359   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.754084   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.254277   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.754104   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.254023   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.753456   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.254174   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.753691   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.254102   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.754161   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.862424   18249 kubeadm.go:1113] duration metric: took 4.720462069s to wait for elevateKubeSystemPrivileges
	I0828 16:52:51.862469   18249 kubeadm.go:394] duration metric: took 14.928497866s to StartCluster
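The repeated `kubectl get sa default` runs between 16:52:47 and 16:52:51 are a poll for the default service account to appear, which is what the elevateKubeSystemPrivileges duration metric above measures. An equivalent shell sketch of that wait, with the binary and kubeconfig paths taken from the log and the retry interval assumed:

    # Illustrative poll loop; the 0.5s interval is an assumption, not from the log
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
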
	I0828 16:52:51.862492   18249 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.862622   18249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:51.863098   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.863295   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 16:52:51.863324   18249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:51.863367   18249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
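The toEnable map above lists every addon this test profile requests at start (registry, ingress, metrics-server, csi-hostpath-driver, and so on). The same addons can also be toggled on an existing profile from the minikube CLI; for example, illustrative only:

    minikube -p addons-990097 addons enable metrics-server
    minikube -p addons-990097 addons list
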
	I0828 16:52:51.863461   18249 addons.go:69] Setting default-storageclass=true in profile "addons-990097"
	I0828 16:52:51.863473   18249 addons.go:69] Setting registry=true in profile "addons-990097"
	I0828 16:52:51.863476   18249 addons.go:69] Setting metrics-server=true in profile "addons-990097"
	I0828 16:52:51.863499   18249 addons.go:234] Setting addon registry=true in "addons-990097"
	I0828 16:52:51.863492   18249 addons.go:69] Setting cloud-spanner=true in profile "addons-990097"
	I0828 16:52:51.863506   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-990097"
	I0828 16:52:51.863519   18249 addons.go:234] Setting addon metrics-server=true in "addons-990097"
	I0828 16:52:51.863529   18249 addons.go:234] Setting addon cloud-spanner=true in "addons-990097"
	I0828 16:52:51.863531   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863549   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863561   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863562   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.863607   18249 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-990097"
	I0828 16:52:51.863654   18249 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:52:51.863678   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863908   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863921   18249 addons.go:69] Setting ingress=true in profile "addons-990097"
	I0828 16:52:51.863926   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863933   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863938   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863948   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863953   18249 addons.go:234] Setting addon ingress=true in "addons-990097"
	I0828 16:52:51.863964   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863982   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863460   18249 addons.go:69] Setting yakd=true in profile "addons-990097"
	I0828 16:52:51.864018   18249 addons.go:69] Setting ingress-dns=true in profile "addons-990097"
	I0828 16:52:51.864030   18249 addons.go:69] Setting storage-provisioner=true in profile "addons-990097"
	I0828 16:52:51.864038   18249 addons.go:234] Setting addon yakd=true in "addons-990097"
	I0828 16:52:51.864041   18249 addons.go:234] Setting addon ingress-dns=true in "addons-990097"
	I0828 16:52:51.864048   18249 addons.go:234] Setting addon storage-provisioner=true in "addons-990097"
	I0828 16:52:51.864050   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864057   18249 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-990097"
	I0828 16:52:51.864061   18249 addons.go:69] Setting gcp-auth=true in profile "addons-990097"
	I0828 16:52:51.864068   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864068   18249 addons.go:69] Setting helm-tiller=true in profile "addons-990097"
	I0828 16:52:51.864081   18249 mustload.go:65] Loading cluster: addons-990097
	I0828 16:52:51.864058   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864087   18249 addons.go:234] Setting addon helm-tiller=true in "addons-990097"
	I0828 16:52:51.864092   18249 addons.go:69] Setting volumesnapshots=true in profile "addons-990097"
	I0828 16:52:51.864087   18249 addons.go:69] Setting volcano=true in profile "addons-990097"
	I0828 16:52:51.864105   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864108   18249 addons.go:234] Setting addon volumesnapshots=true in "addons-990097"
	I0828 16:52:51.863468   18249 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-990097"
	I0828 16:52:51.864138   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864111   18249 addons.go:234] Setting addon volcano=true in "addons-990097"
	I0828 16:52:51.864171   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864297   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864336   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864434   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864142   18249 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-990097"
	I0828 16:52:51.864465   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864046   18249 addons.go:69] Setting inspektor-gadget=true in profile "addons-990097"
	I0828 16:52:51.864493   18249 addons.go:234] Setting addon inspektor-gadget=true in "addons-990097"
	I0828 16:52:51.864543   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864568   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864591   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864798   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864877   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864896   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864905   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864929   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864937   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864955   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864572   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864983   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864081   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-990097"
	I0828 16:52:51.865148   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865166   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865240   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864545   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.865295   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865352   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865431   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.865521   18249 out.go:177] * Verifying Kubernetes components...
	I0828 16:52:51.867093   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:51.885199   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0828 16:52:51.885477   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0828 16:52:51.885492   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0828 16:52:51.885750   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.885755   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0828 16:52:51.885989   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886219   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886558   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886581   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886580   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886688   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886708   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886724   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886737   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887264   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887324   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887350   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.887362   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887907   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.887933   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887944   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.887912   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887987   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.888013   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0828 16:52:51.889234   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0828 16:52:51.890397   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890420   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890438   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890452   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890533   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890558   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890684   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890713   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.891153   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.891189   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.892730   18249 addons.go:234] Setting addon default-storageclass=true in "addons-990097"
	I0828 16:52:51.892913   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.893285   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.893322   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.894924   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.894976   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.895458   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895475   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895521   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895542   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895836   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.895884   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.896367   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896400   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896408   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.896431   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.920845   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46817
	I0828 16:52:51.921517   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.922235   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.922257   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.922922   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.923553   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.923595   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.928048   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0828 16:52:51.928224   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0828 16:52:51.928543   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928629   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928995   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929011   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929139   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929150   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929913   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.930496   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.930519   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.930739   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0828 16:52:51.930776   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0828 16:52:51.931018   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.931228   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931311   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931596   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.931633   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.932148   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932168   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932316   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932335   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932583   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.932657   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.933177   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.933214   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.933573   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.934348   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0828 16:52:51.934983   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.935496   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.935514   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.935540   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0828 16:52:51.935941   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.936141   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.936211   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.936686   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.936702   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.937184   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.937607   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.937779   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.937810   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.938264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.939007   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.939053   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.940198   18249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 16:52:51.941257   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:51.941275   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 16:52:51.941294   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.945245   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.945869   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.945889   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.946114   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.946297   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.946469   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.947368   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.948243   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0828 16:52:51.948630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.949142   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.949159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.949494   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.949670   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.951300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.953224   18249 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 16:52:51.954643   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:51.954663   18249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 16:52:51.954691   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.958105   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.958534   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.958558   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.960564   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0828 16:52:51.960712   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0828 16:52:51.960811   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.961092   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.961160   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0828 16:52:51.961463   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.961645   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.962144   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962212   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962501   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962836   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962852   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.962967   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962980   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.963302   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963364   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963916   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.963951   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.964787   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.964813   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.966540   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.966566   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.966978   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.967204   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.969078   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.970741   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 16:52:51.971825   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 16:52:51.973044   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 16:52:51.973220   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0828 16:52:51.973630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.974106   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.974125   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.974525   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.974714   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.975169   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0828 16:52:51.975891   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.976592   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.976607   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.976669   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.976985   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.977259   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.977312   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I0828 16:52:51.977724   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.978015   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 16:52:51.978118   18249 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0828 16:52:51.978190   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.978212   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.978519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.978701   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.979345   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 16:52:51.979360   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0828 16:52:51.979379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.980554   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 16:52:51.980864   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.981133   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0828 16:52:51.981800   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.982285   18249 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 16:52:51.982336   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 16:52:51.982805   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.982823   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.983085   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.983524   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.983649   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.983860   18249 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:51.983880   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 16:52:51.983898   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.984188   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.984214   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.984253   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.984424   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 16:52:51.984488   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.985059   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 16:52:51.986056   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:51.986113   18249 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 16:52:51.986133   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.986876   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.986944   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0828 16:52:51.987233   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 16:52:51.987277   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.987408   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.988124   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.988172   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:51.988183   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 16:52:51.988201   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.988609   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.988624   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.989053   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.989096   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989270   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.989445   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0828 16:52:51.989794   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.989811   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989923   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.990447   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990496   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990539   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0828 16:52:51.990826   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.990852   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990950   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990969   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991161   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.991400   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.991419   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.991421   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991402   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991650   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991758   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.991824   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992071   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.992286   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992541   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992569   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.992585   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.992850   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.992917   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.992930   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992959   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.992978   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.992986   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.992997   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.993004   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.993157   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.993194   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.993202   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.993228   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	W0828 16:52:51.993270   18249 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0828 16:52:51.993367   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.993502   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.993600   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.994634   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0828 16:52:51.994968   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.995300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.995803   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.995829   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.996150   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.996660   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.996695   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.996700   18249 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 16:52:51.998323   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0828 16:52:51.998854   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.999173   18249 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:51.999191   18249 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 16:52:51.999209   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.999355   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.999375   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.999733   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.000029   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.000074   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0828 16:52:52.000535   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.000555   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.000620   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001158   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.001173   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.001242   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0828 16:52:52.001533   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.001840   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001919   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.002585   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.002779   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.003130   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.003158   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.003646   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.003664   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.003919   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.004173   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.004207   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004302   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.004721   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.004745   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004915   18249 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 16:52:52.005124   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.005449   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.005556   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0828 16:52:52.005574   18249 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 16:52:52.005964   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.006575   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.006739   18249 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.006749   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 16:52:52.006762   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.007011   18249 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-990097"
	I0828 16:52:52.007047   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:52.007210   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.007223   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.007395   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.007635   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.007947   18249 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 16:52:52.008055   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.008153   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.008529   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.009065   18249 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:52.009079   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 16:52:52.009091   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.010799   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011239   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.011257   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011423   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.011668   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.011806   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.011928   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.012452   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.013295   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013770   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.013865   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013823   18249 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 16:52:52.013979   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.014280   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.014407   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.014585   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.015267   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:52.015319   18249 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 16:52:52.015347   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.018874   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43185
	I0828 16:52:52.019066   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019382   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.019521   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.019539   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019711   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.019861   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.020082   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.020241   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.020251   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.020261   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.020835   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.021022   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.021132   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0828 16:52:52.021489   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.022124   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.022148   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.022508   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.022715   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.022934   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.024013   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 16:52:52.024558   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.026046   18249 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 16:52:52.026047   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027328   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027344   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.027383   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 16:52:52.027410   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.028651   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.028667   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 16:52:52.028681   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.031130   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031559   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.031573   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031751   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.031908   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.032036   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.032165   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.032716   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033159   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0828 16:52:52.033303   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.033338   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.033428   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0828 16:52:52.033563   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.033754   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.033781   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.033785   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.034222   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034240   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034224   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034253   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.034269   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034593   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034635   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034793   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.035047   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.035083   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.036108   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.036365   18249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.036381   18249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 16:52:52.036396   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.039229   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039626   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.039642   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039793   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.039933   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.040034   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.040110   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.050832   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.050853   18249 retry.go:31] will retry after 265.877478ms: ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.065458   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0828 16:52:52.065895   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.066365   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.066389   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.066695   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.066934   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.068686   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.070267   18249 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 16:52:52.071813   18249 out.go:177]   - Using image docker.io/busybox:stable
	I0828 16:52:52.072975   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.073002   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 16:52:52.073024   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.076493   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.076991   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.077021   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.077115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.077290   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.077439   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.077557   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.078345   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.078374   18249 retry.go:31] will retry after 279.535479ms: ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.457264   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:52.472106   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:52.472127   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 16:52:52.472898   18249 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:52.472911   18249 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 16:52:52.477184   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:52.477383   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 16:52:52.564015   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:52.564048   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 16:52:52.575756   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:52.575777   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 16:52:52.585811   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:52.590531   18249 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:52.590558   18249 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 16:52:52.594784   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.613114   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:52.613137   18249 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 16:52:52.618849   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.630514   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 16:52:52.630548   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0828 16:52:52.680492   18249 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.680511   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 16:52:52.683692   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.711921   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:52.711950   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 16:52:52.758918   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:52.758942   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 16:52:52.772563   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:52.772585   18249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 16:52:52.783118   18249 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:52.783140   18249 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 16:52:52.784569   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.809813   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:52.809848   18249 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 16:52:52.826609   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.836825   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:52.836855   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0828 16:52:52.857767   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:52.857793   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 16:52:52.867452   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.903663   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:52.903735   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 16:52:52.914976   18249 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:52.914995   18249 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 16:52:52.980163   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:52.980191   18249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 16:52:52.984211   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:52.984228   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 16:52:53.040803   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:53.040824   18249 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 16:52:53.043499   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:53.043517   18249 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 16:52:53.059538   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:53.066983   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:53.067015   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 16:52:53.136171   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:53.136204   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 16:52:53.144640   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:53.187366   18249 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.187394   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 16:52:53.212893   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.212913   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 16:52:53.235809   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:53.235832   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 16:52:53.288679   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:53.288698   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 16:52:53.385998   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.397529   18249 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:53.397559   18249 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 16:52:53.399651   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.466548   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:53.466578   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 16:52:53.581666   18249 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.581691   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 16:52:53.691064   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:53.691083   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 16:52:53.853240   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.941644   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:53.941669   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 16:52:54.272844   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.272880   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 16:52:54.495971   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.756169   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.298876818s)
	I0828 16:52:54.756225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756239   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.756244   18249 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.278834015s)
	I0828 16:52:54.756268   18249 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0828 16:52:54.756332   18249 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.279127216s)
	I0828 16:52:54.756551   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.756572   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.756589   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756597   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.757015   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:54.757050   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.757059   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.757383   18249 node_ready.go:35] waiting up to 6m0s for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786124   18249 node_ready.go:49] node "addons-990097" has status "Ready":"True"
	I0828 16:52:54.786149   18249 node_ready.go:38] duration metric: took 28.747442ms for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786161   18249 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:52:54.827906   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.293839   18249 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-990097" context rescaled to 1 replicas
	I0828 16:52:55.917518   18249 pod_ready.go:93] pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:55.917551   18249 pod_ready.go:82] duration metric: took 1.089601559s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.917564   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:57.075627   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.489775878s)
	I0828 16:52:57.075691   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.075706   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.075965   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.075988   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.075998   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.076007   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.077276   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:57.077308   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.077327   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.979995   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:59.035882   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 16:52:59.035917   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.039427   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.039927   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.039958   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.040104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.040296   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.040538   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.040737   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:59.280183   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 16:52:59.327255   18249 addons.go:234] Setting addon gcp-auth=true in "addons-990097"
	I0828 16:52:59.327310   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:59.327726   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.327759   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.342823   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0828 16:52:59.343340   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.343791   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.343813   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.344064   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.344682   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.344737   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.360102   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0828 16:52:59.360990   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.361500   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.361519   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.361841   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.362016   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:59.363643   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:59.363866   18249 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 16:52:59.363888   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.366987   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367482   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.367512   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367772   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.367974   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.368154   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.368303   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:53:00.143087   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.524208814s)
	I0828 16:53:00.143133   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143143   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143179   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.459455766s)
	I0828 16:53:00.143218   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358630374s)
	I0828 16:53:00.143225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143234   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143237   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143245   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143279   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316631996s)
	I0828 16:53:00.143308   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143320   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143325   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275851965s)
	I0828 16:53:00.143341   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143349   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143439   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143454   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143465   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143477   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143588   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143601   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143603   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143610   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143622   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143642   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143669   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143673   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143678   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143680   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143686   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143689   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143693   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143697   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143705   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143736   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143743   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143875   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143971   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143990   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144005   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144007   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.144055   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.144079   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144094   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144037   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144153   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144353   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144059   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.145188   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.145203   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146141   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.146157   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146170   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146183   18249 addons.go:475] Verifying addon registry=true in "addons-990097"
	I0828 16:53:00.146206   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.086636827s)
	I0828 16:53:00.146315   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.001646973s)
	I0828 16:53:00.146337   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146350   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146459   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.760428919s)
	W0828 16:53:00.146488   18249 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:53:00.146500   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146507   18249 retry.go:31] will retry after 285.495702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
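	(The two error blocks above are the same failure, logged once by the apply callback and once by the retry scheduler: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass in the same kubectl apply batch that creates the snapshot.storage.k8s.io CRDs, so the API server rejects the new kind until the CRDs are registered, and minikube simply retries a moment later. As a hedged illustration only, not minikube's own code and using the manifest paths shown in the log, the usual way to avoid this race by hand is to wait for the CRDs to report Established before applying anything that references them:

	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	    # Wait until the API server has registered the new kinds.
	    kubectl wait --for=condition=Established --timeout=60s \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	      crd/volumesnapshots.snapshot.storage.k8s.io
	    # Only now apply resources of the new kinds.
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	)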
	I0828 16:53:00.146512   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146514   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.746824063s)
	I0828 16:53:00.146540   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146545   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146550   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146559   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146560   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146618   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146697   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.293422003s)
	I0828 16:53:00.146718   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146857   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147307   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147338   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147345   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147352   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147359   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147366   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.552552261s)
	I0828 16:53:00.147388   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147399   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147398   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147422   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147429   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147437   18249 addons.go:475] Verifying addon metrics-server=true in "addons-990097"
	I0828 16:53:00.147458   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147509   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147518   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147526   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147534   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147545   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147554   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147758   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147784   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147790   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148382   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148394   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148412   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148414   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148424   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148431   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148445   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148453   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148461   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148468   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148778   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148815   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148823   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149053   18249 out.go:177] * Verifying registry addon...
	I0828 16:53:00.149082   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.149108   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.149116   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149124   18249 addons.go:475] Verifying addon ingress=true in "addons-990097"
	I0828 16:53:00.149773   18249 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-990097 service yakd-dashboard -n yakd-dashboard
	
	I0828 16:53:00.151268   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 16:53:00.151296   18249 out.go:177] * Verifying ingress addon...
	I0828 16:53:00.153273   18249 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 16:53:00.166762   18249 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 16:53:00.166788   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.182165   18249 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 16:53:00.182192   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.189119   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.189137   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.189552   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.189574   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	W0828 16:53:00.189671   18249 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0828 16:53:00.192266   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.192288   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.192629   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.192650   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.192654   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.432806   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:53:00.441373   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:00.616225   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.120200896s)
	I0828 16:53:00.616277   18249 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.252381186s)
	I0828 16:53:00.616290   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616306   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616613   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616635   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616651   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616659   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616960   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616974   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616985   18249 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:53:00.618208   18249 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 16:53:00.618221   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:53:00.620074   18249 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 16:53:00.620941   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 16:53:00.621479   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:53:00.621497   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 16:53:00.649240   18249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 16:53:00.649265   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:00.666906   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.666973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.798819   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:53:00.798846   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 16:53:00.965848   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:00.965868   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 16:53:01.096603   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:01.146627   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.246922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.247289   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.625375   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.727635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.728621   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.939675   18249 pod_ready.go:98] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939706   18249 pod_ready.go:82] duration metric: took 6.022133006s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	E0828 16:53:01.939721   18249 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939735   18249 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947681   18249 pod_ready.go:93] pod "etcd-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.947709   18249 pod_ready.go:82] duration metric: took 7.961903ms for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947723   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965179   18249 pod_ready.go:93] pod "kube-apiserver-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.965209   18249 pod_ready.go:82] duration metric: took 17.478027ms for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965223   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975413   18249 pod_ready.go:93] pod "kube-controller-manager-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.975442   18249 pod_ready.go:82] duration metric: took 10.210377ms for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975456   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989070   18249 pod_ready.go:93] pod "kube-proxy-8qj9l" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.989092   18249 pod_ready.go:82] duration metric: took 13.627304ms for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989102   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.126567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.155944   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.158684   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:02.322404   18249 pod_ready.go:93] pod "kube-scheduler-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:02.322427   18249 pod_ready.go:82] duration metric: took 333.317872ms for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.322440   18249 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.474322   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.04146744s)
	I0828 16:53:02.474395   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474415   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.474701   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.474716   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.474743   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.474804   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474818   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.475006   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.475026   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585160   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.48851671s)
	I0828 16:53:02.585206   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585217   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585499   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585553   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585584   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585591   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.585596   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585845   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585864   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585870   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.587678   18249 addons.go:475] Verifying addon gcp-auth=true in "addons-990097"
	I0828 16:53:02.589137   18249 out.go:177] * Verifying gcp-auth addon...
	I0828 16:53:02.590957   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 16:53:02.611253   18249 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:53:02.611280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:02.625344   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.656451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.659296   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.096111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.127568   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.156535   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.158882   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.594789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.625961   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.655530   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.656632   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.100416   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.202367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.202567   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.202579   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.332466   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:04.594922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.625960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.654548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.657398   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.095212   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.127010   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.154957   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.157414   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.600067   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.627331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.655666   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.658371   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.095702   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.125685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.166060   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.196174   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.595324   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.625617   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.654792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.657272   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.827854   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:07.094934   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.126052   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:07.155943   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.157205   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.843759   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.843956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.844210   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.845956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.094558   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.126496   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.156387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.158864   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.594938   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.625675   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.654652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.658021   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.829775   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:09.095286   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.125697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.156180   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.157544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:09.593920   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.626336   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.655412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.657265   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.095098   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.126775   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.154380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.156565   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.595836   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.625685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.654838   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.657544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.093858   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.125963   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.155451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.157776   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.329080   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:11.594338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.625913   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.655531   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.657757   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.094680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.125074   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.156504   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.157527   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.594657   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.625349   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.654353   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.656983   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.094718   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.125151   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.154331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.156598   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.595126   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.626873   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.654512   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.656740   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.828160   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:14.094559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.126019   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.155228   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.158042   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:14.596006   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.626608   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.656951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.659254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.094812   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.125914   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.155459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.157532   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.595411   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.625118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.654905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.656932   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.833089   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:16.095434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.125283   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.155066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.156964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:16.594257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.625899   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.655321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.658052   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.097404   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.124748   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.155670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.158403   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.594954   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.625453   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.654592   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.656593   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.126118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.155697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.156856   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.328637   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:18.595104   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.625985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.655062   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.657082   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.094569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.125822   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.155202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.157964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.594797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.625854   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.655328   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.657943   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.095529   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.125903   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:20.155547   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.157641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.329359   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:20.855221   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.858381   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.859843   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.860540   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.094959   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.129150   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.161797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.162220   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:21.594694   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.625635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.655280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.657315   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.094660   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.125891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.473066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.473715   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.476586   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:22.595128   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.625652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.654993   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.093886   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.126139   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.156079   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.158250   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.594455   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.625689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.654673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.657362   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.095220   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.197203   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.197523   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.197678   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.602569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.654778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.656915   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.829081   18249 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:24.829106   18249 pod_ready.go:82] duration metric: took 22.50665926s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:24.829114   18249 pod_ready.go:39] duration metric: took 30.042940712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:53:24.829128   18249 api_server.go:52] waiting for apiserver process to appear ...
	I0828 16:53:24.829180   18249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:53:24.846345   18249 api_server.go:72] duration metric: took 32.982988344s to wait for apiserver process to appear ...
	I0828 16:53:24.846376   18249 api_server.go:88] waiting for apiserver healthz status ...
	I0828 16:53:24.846397   18249 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0828 16:53:24.852123   18249 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0828 16:53:24.853689   18249 api_server.go:141] control plane version: v1.31.0
	I0828 16:53:24.853713   18249 api_server.go:131] duration metric: took 7.33084ms to wait for apiserver health ...
	I0828 16:53:24.853721   18249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 16:53:24.862271   18249 system_pods.go:59] 18 kube-system pods found
	I0828 16:53:24.862300   18249 system_pods.go:61] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.862310   18249 system_pods.go:61] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.862319   18249 system_pods.go:61] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.862329   18249 system_pods.go:61] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.862334   18249 system_pods.go:61] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.862340   18249 system_pods.go:61] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.862346   18249 system_pods.go:61] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.862351   18249 system_pods.go:61] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.862357   18249 system_pods.go:61] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.862364   18249 system_pods.go:61] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.862376   18249 system_pods.go:61] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.862382   18249 system_pods.go:61] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.862394   18249 system_pods.go:61] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.862404   18249 system_pods.go:61] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.862414   18249 system_pods.go:61] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862426   18249 system_pods.go:61] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862432   18249 system_pods.go:61] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.862438   18249 system_pods.go:61] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.862447   18249 system_pods.go:74] duration metric: took 8.718746ms to wait for pod list to return data ...
	I0828 16:53:24.862458   18249 default_sa.go:34] waiting for default service account to be created ...
	I0828 16:53:24.864930   18249 default_sa.go:45] found service account: "default"
	I0828 16:53:24.864948   18249 default_sa.go:55] duration metric: took 2.483987ms for default service account to be created ...
	I0828 16:53:24.864954   18249 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 16:53:24.873151   18249 system_pods.go:86] 18 kube-system pods found
	I0828 16:53:24.873179   18249 system_pods.go:89] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.873192   18249 system_pods.go:89] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.873200   18249 system_pods.go:89] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.873209   18249 system_pods.go:89] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.873217   18249 system_pods.go:89] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.873223   18249 system_pods.go:89] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.873230   18249 system_pods.go:89] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.873239   18249 system_pods.go:89] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.873246   18249 system_pods.go:89] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.873252   18249 system_pods.go:89] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.873261   18249 system_pods.go:89] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.873267   18249 system_pods.go:89] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.873275   18249 system_pods.go:89] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.873283   18249 system_pods.go:89] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.873293   18249 system_pods.go:89] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873305   18249 system_pods.go:89] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873311   18249 system_pods.go:89] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.873319   18249 system_pods.go:89] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.873330   18249 system_pods.go:126] duration metric: took 8.36895ms to wait for k8s-apps to be running ...
	I0828 16:53:24.873342   18249 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 16:53:24.873397   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:53:24.891586   18249 system_svc.go:56] duration metric: took 18.235397ms WaitForService to wait for kubelet
	I0828 16:53:24.891614   18249 kubeadm.go:582] duration metric: took 33.028263807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:53:24.891635   18249 node_conditions.go:102] verifying NodePressure condition ...
	I0828 16:53:24.895227   18249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 16:53:24.895250   18249 node_conditions.go:123] node cpu capacity is 2
	I0828 16:53:24.895261   18249 node_conditions.go:105] duration metric: took 3.620897ms to run NodePressure ...
	I0828 16:53:24.895272   18249 start.go:241] waiting for startup goroutines ...
	I0828 16:53:25.094459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.125633   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.155792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.157753   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:25.595906   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.625747   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.655075   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.658011   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.094834   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.129755   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.155136   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.157330   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.593981   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.625973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.664009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.664214   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.095448   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.125667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.154619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.157410   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.595673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.625374   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.655905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.657898   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.094619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.128498   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.154730   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.595931   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.625670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.655499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.659580   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.094542   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.125191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.154692   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.156836   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.594830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.625397   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.655016   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.658369   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.095041   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.125951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.197156   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.197430   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.593884   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.626012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.655288   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.658497   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.094267   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.126053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.155845   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:31.157620   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.595111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.625862   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.659323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.659393   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.095279   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.125599   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.199254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:32.199409   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.594421   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.655429   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.657475   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.094915   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.125310   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.154609   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.156659   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.594492   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.625457   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.654434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.656859   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.094787   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.126012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.155559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.158068   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.606896   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.655451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.658409   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.094387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.125741   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.155049   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.156962   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.595142   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.626314   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.656424   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.658188   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.094587   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.125299   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.157566   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.162381   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.594757   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.625338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.654928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.657667   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.095534   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.125174   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.154440   18249 kapi.go:107] duration metric: took 37.003171679s to wait for kubernetes.io/minikube-addons=registry ...
	I0828 16:53:37.156447   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.594798   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.625235   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.656908   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.095661   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.126092   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.158261   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.595348   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.625091   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.657913   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.094636   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.126184   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.157665   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.594133   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.658035   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.095449   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.125725   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.157599   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.594861   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.625830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.657531   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.124902   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.158798   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.594588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.625002   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.657786   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.095776   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.127039   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.158485   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.645960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.647890   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.657722   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.095058   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.127772   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.157380   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.595802   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.626208   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.659191   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.095784   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.125689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.157160   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.594967   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.625614   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.657657   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.098165   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.125532   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.157027   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.595371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.626505   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.658717   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.094137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.125930   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.159054   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.597552   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.625716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.657534   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.095137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.125905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.158224   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.636222   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.637581   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.657044   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.094826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.125355   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.157656   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.594813   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.631137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.657624   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.095053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.128446   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.157355   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.595223   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.626255   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.658186   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.095856   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.127379   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:50.158702   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.595643   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.698127   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.698171   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.094801   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.125567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.157384   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:51.595613   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.627271   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.657145   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.101226   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.125053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.157436   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.593985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.625898   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.658285   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.095068   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.126152   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.157104   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.594124   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.626149   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.657735   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.099081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.126193   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.157152   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.595009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.626412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.720927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.094671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.125251   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.156958   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.596323   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.624970   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.657746   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.094441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.125622   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.156601   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.595765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.630056   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.698961   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.094616   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.125818   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.157863   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.594274   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.624777   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.657816   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.096341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.126916   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.158947   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.595441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.625428   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.657100   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.095929   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.125671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.157343   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.594697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.625751   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.657338   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.095059   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.125731   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.157953   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.595257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.627464   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.657563   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.094667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.125904   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.157762   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.594499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.624717   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.657505   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.094567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.125907   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.196935   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.595038   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.625765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.696647   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.094272   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.125427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.157639   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.594871   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.625673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.657841   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.094887   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.126789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.157551   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.595035   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.627362   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.095367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.197028   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.197341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.594590   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.625380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.657085   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.095202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.126191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.596094   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.625814   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.658641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.100240   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.131987   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.158146   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.595588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.625705   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.657218   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.141202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.141936   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.170688   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.595335   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.625506   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.657914   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.097081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.126472   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.157818   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.595778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.625507   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.658020   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.095683   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.125569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.157674   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.595427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.626371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.657765   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.094606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.130408   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.158323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.595040   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.626209   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.658014   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.095395   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.125926   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.157848   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.594680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.625860   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.657412   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.094853   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.196216   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.196765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.600021   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.626826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.657927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:14.095522   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.125684   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:14.157491   18249 kapi.go:107] duration metric: took 1m14.004214208s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 16:54:14.594716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.625548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.094682   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.125350   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.596546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.625572   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.094125   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.125975   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.594260   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.625018   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.094891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.125763   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.594205   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.626413   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.094280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.125555   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.598192   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.627321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.095258   18249 kapi.go:107] duration metric: took 1m16.504298837s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 16:54:19.097233   18249 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-990097 cluster.
	I0828 16:54:19.098992   18249 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 16:54:19.100337   18249 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 16:54:19.132159   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.626709   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.125928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.626509   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.126771   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.625546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.126321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.626308   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:23.128207   18249 kapi.go:107] duration metric: took 1m22.507265973s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 16:54:23.129806   18249 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0828 16:54:23.131012   18249 addons.go:510] duration metric: took 1m31.267643413s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0828 16:54:23.131051   18249 start.go:246] waiting for cluster config update ...
	I0828 16:54:23.131069   18249 start.go:255] writing updated cluster config ...
	I0828 16:54:23.131315   18249 ssh_runner.go:195] Run: rm -f paused
	I0828 16:54:23.182950   18249 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 16:54:23.184758   18249 out.go:177] * Done! kubectl is now configured to use "addons-990097" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.659772558Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864618659745229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543383,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30ccf3db-1d24-47b7-b7d2-b8d59a101e68 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.660410135Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d342c21-cdbe-4434-b35f-4ec79f1e89f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.660493451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d342c21-cdbe-4434-b35f-4ec79f1e89f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.660922540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3176d505038280419e560c756e4cadb89509db105bffa88265467f2017be3774,PodSandboxId:cef02da7b66ed11b622cd3690b07d9e4a7b9f616adafd2950402c28f3f000671,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724864614208698784,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d39abde-ee29-4a16-9798-84ca035d198c,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69871105d052e94de5d07cab30d591b8c4fd656d73816c78781b2112860e3fe,PodSandboxId:fb2a37a4329926b1da23ec405eb68148783a46b49c49b0ba78d60643ce61caa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724864612029211669,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0676fa72-54fc-4f84-8398-9fa6fe5690d5,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.
hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2854a478340381451b911b5768ee455787c1bbcc9946c76a56e81c7c43402731,PodSandboxId:869c9e3876dde22297a6c1d8a7fda0bf3f0cfc5bb110d3a1cb34b25baf408be9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724864053112728183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-q4hvm,io.kubernetes.pod.namespace: ingress-ng
inx,io.kubernetes.pod.uid: ff0eadf6-676d-45fe-80d5-d11090925146,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fa
daef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367
d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ra
ncher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:m
etrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d45197c3dbb8b652b301c39abd54c8209ca
5c020df1040b074aef59ec52bcf1,PodSandboxId:b33b27e79c7b3f2f7efffd6baa9cf67c5aacc6360a2007c5360dc400aeab6718,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1724864009955359061,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-znl98,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6deda0a2-a0db-4d93-b2ee-9436be933ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa1fc0446f9a2422091fa80156834f34785b868e02459d05e0c5bc85b7d8441,PodSandboxId:df9993caf8025077d0c6ff7a6a94ca2011ab83b17f188111868744a548d88437,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1724864003802062087,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-j24tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda32bb5-afc7-4b0f-939f-fe0614025dc2,},Annotations:map[string]string{io.kubernetes.container.hash:
7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280,PodSandboxId:11dd8b068baf7811855beb8212d6899846a1fadeaf06be39f05c53364cd17d9b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724863993430364294,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020f9b2-3535-4950-b84f-5387dcc8f455,},Annotations:map[
string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e8206a30fd6903e510907f9f5c38b1c30dc2ab4234ee365e91a3331dd3127a,PodSandboxId:b442e3b6ad7a06c8cf875a38710d69a359163a51eea3ad9c03b4097c71eb6980,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724863985585592941,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-wr7ks,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535
674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e
96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d342c21-cdbe-4434-b35f-4ec79f1e89f1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.691098270Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=927646eb-a54b-4bfd-8b86-044ff0269152 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.691182768Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=927646eb-a54b-4bfd-8b86-044ff0269152 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.692161433Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b906dc3c-337e-468c-8cbd-087f74408d3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.693525910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864618693499085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543383,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b906dc3c-337e-468c-8cbd-087f74408d3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.694150845Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2797468-899a-42aa-bf5c-b2905bb214d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.694219132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2797468-899a-42aa-bf5c-b2905bb214d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.694719029Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3176d505038280419e560c756e4cadb89509db105bffa88265467f2017be3774,PodSandboxId:cef02da7b66ed11b622cd3690b07d9e4a7b9f616adafd2950402c28f3f000671,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724864614208698784,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d39abde-ee29-4a16-9798-84ca035d198c,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69871105d052e94de5d07cab30d591b8c4fd656d73816c78781b2112860e3fe,PodSandboxId:fb2a37a4329926b1da23ec405eb68148783a46b49c49b0ba78d60643ce61caa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724864612029211669,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0676fa72-54fc-4f84-8398-9fa6fe5690d5,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.
hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2854a478340381451b911b5768ee455787c1bbcc9946c76a56e81c7c43402731,PodSandboxId:869c9e3876dde22297a6c1d8a7fda0bf3f0cfc5bb110d3a1cb34b25baf408be9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724864053112728183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-q4hvm,io.kubernetes.pod.namespace: ingress-ng
inx,io.kubernetes.pod.uid: ff0eadf6-676d-45fe-80d5-d11090925146,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fa
daef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367
d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ra
ncher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:m
etrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d45197c3dbb8b652b301c39abd54c8209ca
5c020df1040b074aef59ec52bcf1,PodSandboxId:b33b27e79c7b3f2f7efffd6baa9cf67c5aacc6360a2007c5360dc400aeab6718,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1724864009955359061,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-znl98,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6deda0a2-a0db-4d93-b2ee-9436be933ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa1fc0446f9a2422091fa80156834f34785b868e02459d05e0c5bc85b7d8441,PodSandboxId:df9993caf8025077d0c6ff7a6a94ca2011ab83b17f188111868744a548d88437,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1724864003802062087,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-j24tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda32bb5-afc7-4b0f-939f-fe0614025dc2,},Annotations:map[string]string{io.kubernetes.container.hash:
7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280,PodSandboxId:11dd8b068baf7811855beb8212d6899846a1fadeaf06be39f05c53364cd17d9b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724863993430364294,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020f9b2-3535-4950-b84f-5387dcc8f455,},Annotations:map[
string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e8206a30fd6903e510907f9f5c38b1c30dc2ab4234ee365e91a3331dd3127a,PodSandboxId:b442e3b6ad7a06c8cf875a38710d69a359163a51eea3ad9c03b4097c71eb6980,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724863985585592941,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-wr7ks,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535
674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e
96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2797468-899a-42aa-bf5c-b2905bb214d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.729988816Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1662aaa-ac12-4da1-ae46-5eeb25319c5e name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.730074164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1662aaa-ac12-4da1-ae46-5eeb25319c5e name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.731122849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=892df9ee-aedd-4d08-b392-f038f42fea63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.732414782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864618732384141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543383,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=892df9ee-aedd-4d08-b392-f038f42fea63 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.733119530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1a3005e-4429-4768-93f4-fa1de2f8b4f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.733218449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1a3005e-4429-4768-93f4-fa1de2f8b4f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.733654853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3176d505038280419e560c756e4cadb89509db105bffa88265467f2017be3774,PodSandboxId:cef02da7b66ed11b622cd3690b07d9e4a7b9f616adafd2950402c28f3f000671,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724864614208698784,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d39abde-ee29-4a16-9798-84ca035d198c,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69871105d052e94de5d07cab30d591b8c4fd656d73816c78781b2112860e3fe,PodSandboxId:fb2a37a4329926b1da23ec405eb68148783a46b49c49b0ba78d60643ce61caa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724864612029211669,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0676fa72-54fc-4f84-8398-9fa6fe5690d5,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.
hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2854a478340381451b911b5768ee455787c1bbcc9946c76a56e81c7c43402731,PodSandboxId:869c9e3876dde22297a6c1d8a7fda0bf3f0cfc5bb110d3a1cb34b25baf408be9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724864053112728183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-q4hvm,io.kubernetes.pod.namespace: ingress-ng
inx,io.kubernetes.pod.uid: ff0eadf6-676d-45fe-80d5-d11090925146,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fa
daef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367
d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ra
ncher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:m
etrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d45197c3dbb8b652b301c39abd54c8209ca
5c020df1040b074aef59ec52bcf1,PodSandboxId:b33b27e79c7b3f2f7efffd6baa9cf67c5aacc6360a2007c5360dc400aeab6718,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1724864009955359061,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-znl98,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6deda0a2-a0db-4d93-b2ee-9436be933ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa1fc0446f9a2422091fa80156834f34785b868e02459d05e0c5bc85b7d8441,PodSandboxId:df9993caf8025077d0c6ff7a6a94ca2011ab83b17f188111868744a548d88437,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1724864003802062087,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-j24tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda32bb5-afc7-4b0f-939f-fe0614025dc2,},Annotations:map[string]string{io.kubernetes.container.hash:
7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280,PodSandboxId:11dd8b068baf7811855beb8212d6899846a1fadeaf06be39f05c53364cd17d9b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724863993430364294,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020f9b2-3535-4950-b84f-5387dcc8f455,},Annotations:map[
string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e8206a30fd6903e510907f9f5c38b1c30dc2ab4234ee365e91a3331dd3127a,PodSandboxId:b442e3b6ad7a06c8cf875a38710d69a359163a51eea3ad9c03b4097c71eb6980,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724863985585592941,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-wr7ks,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535
674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e
96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1a3005e-4429-4768-93f4-fa1de2f8b4f7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.764143374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b2631349-2ced-49d9-b90d-b53306f7426c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.764266572Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b2631349-2ced-49d9-b90d-b53306f7426c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.765514458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df174915-a185-49ba-bd01-97cb13b08d6f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.766782089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864618766730708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:543383,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df174915-a185-49ba-bd01-97cb13b08d6f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.767318775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46e136a7-1c6f-45b4-b41e-5427a029f4b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.767385320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46e136a7-1c6f-45b4-b41e-5427a029f4b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:03:38 addons-990097 crio[658]: time="2024-08-28 17:03:38.767817059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3176d505038280419e560c756e4cadb89509db105bffa88265467f2017be3774,PodSandboxId:cef02da7b66ed11b622cd3690b07d9e4a7b9f616adafd2950402c28f3f000671,Metadata:&ContainerMetadata{Name:helper-pod,Attempt:0,},Image:&ImageSpec{Image:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824,State:CONTAINER_EXITED,CreatedAt:1724864614208698784,Labels:map[string]string{io.kubernetes.container.name: helper-pod,io.kubernetes.pod.name: helper-pod-delete-pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 1d39abde-ee29-4a16-9798-84ca035d198c,},Annotations:map[string]string{io.kubernetes.container.hash: 973dbf55,io.kubernetes.container.restartCoun
t: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e69871105d052e94de5d07cab30d591b8c4fd656d73816c78781b2112860e3fe,PodSandboxId:fb2a37a4329926b1da23ec405eb68148783a46b49c49b0ba78d60643ce61caa3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65ad0d468eb1c558bf7f4e64e790f586e9eda649ee9f130cd0e835b292bbc5ac,State:CONTAINER_EXITED,CreatedAt:1724864612029211669,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: test-local-path,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0676fa72-54fc-4f84-8398-9fa6fe5690d5,},Annotations:map[string]string{io.kubernetes.container.hash: dd3595ac,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.
hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2854a478340381451b911b5768ee455787c1bbcc9946c76a56e81c7c43402731,PodSandboxId:869c9e3876dde22297a6c1d8a7fda0bf3f0cfc5bb110d3a1cb34b25baf408be9,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1724864053112728183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-q4hvm,io.kubernetes.pod.namespace: ingress-ng
inx,io.kubernetes.pod.uid: ff0eadf6-676d-45fe-80d5-d11090925146,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fa
daef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367
d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/ra
ncher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:m
etrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d45197c3dbb8b652b301c39abd54c8209ca
5c020df1040b074aef59ec52bcf1,PodSandboxId:b33b27e79c7b3f2f7efffd6baa9cf67c5aacc6360a2007c5360dc400aeab6718,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5d78bb8f226e8d943746243233f733db4e80a8d6794f6d193b12b811bcb6cd34,State:CONTAINER_RUNNING,CreatedAt:1724864009955359061,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-769b77f747-znl98,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6deda0a2-a0db-4d93-b2ee-9436be933ce2,},Annotations:map[string]string{io.kubernetes.container.hash: 6472789b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfa1fc0446f9a2422091fa80156834f34785b868e02459d05e0c5bc85b7d8441,PodSandboxId:df9993caf8025077d0c6ff7a6a94ca2011ab83b17f188111868744a548d88437,Metadata:&ContainerMetadata{Name:nvidia-device-plugin-ctr,Attempt:0,},Image:&ImageSpec{Image:nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:159abe21a6880acafcba64b5e25c48b3e74134ca6823dc553a29c127693ace3e,State:CONTAINER_RUNNING,CreatedAt:1724864003802062087,Labels:map[string]string{io.kubernetes.container.name: nvidia-device-plugin-ctr,io.kubernetes.pod.name: nvidia-device-plugin-daemonset-j24tf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fda32bb5-afc7-4b0f-939f-fe0614025dc2,},Annotations:map[string]string{io.kubernetes.container.hash:
7c4b2818,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280,PodSandboxId:11dd8b068baf7811855beb8212d6899846a1fadeaf06be39f05c53364cd17d9b,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1724863993430364294,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3020f9b2-3535-4950-b84f-5387dcc8f455,},Annotations:map[
string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e8206a30fd6903e510907f9f5c38b1c30dc2ab4234ee365e91a3331dd3127a,PodSandboxId:b442e3b6ad7a06c8cf875a38710d69a359163a51eea3ad9c03b4097c71eb6980,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1724863985585592941,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-b48cc5f79-wr7ks,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab,},Annotations:map[string]string{io.kubernetes.container.hash: b375e3d3,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535
674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string
]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandl
er:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Im
ageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e
96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46e136a7-1c6f-45b4-b41e-5427a029f4b0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	3176d50503828       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             4 seconds ago       Exited              helper-pod                 0                   cef02da7b66ed       helper-pod-delete-pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6
	e69871105d052       docker.io/library/busybox@sha256:50aa4698fa6262977cff89181b2664b99d8a56dbca847bf62f2ef04854597cf8                            6 seconds ago       Exited              busybox                    0                   fb2a37a432992       test-local-path
	7d927dbf90a83       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              41 seconds ago      Running             nginx                      0                   4d68d074fbaf6       nginx
	c026a720fa74e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                   0                   736ed095eb5c9       gcp-auth-89d5ffd79-hhsh7
	2854a47834038       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                 0                   869c9e3876dde       ingress-nginx-controller-bc57996ff-q4hvm
	cb13e41307b56       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              patch                      0                   653bf553c3fe1       ingress-nginx-admission-patch-h8rvs
	515908142ae57       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                     0                   e2e20a8025ba1       ingress-nginx-admission-create-dqzdf
	5bd52d706a171       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             9 minutes ago       Running             local-path-provisioner     0                   7bde8dc056090       local-path-provisioner-86d989889c-fs8wf
	9760e94848e1a       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        9 minutes ago       Running             metrics-server             0                   c79990266e87d       metrics-server-84c5f94fbc-s6z6n
	0d45197c3dbb8       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               10 minutes ago      Running             cloud-spanner-emulator     0                   b33b27e79c7b3       cloud-spanner-emulator-769b77f747-znl98
	cfa1fc0446f9a       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   df9993caf8025       nvidia-device-plugin-daemonset-j24tf
	e9fe58775a0c9       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   11dd8b068baf7       kube-ingress-dns-minikube
	02e8206a30fd6       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  10 minutes ago      Running             tiller                     0                   b442e3b6ad7a0       tiller-deploy-b48cc5f79-wr7ks
	092298cdfb616       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner        0                   c61ef1e53e51b       storage-provisioner
	04f71727199d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             10 minutes ago      Running             coredns                    0                   50cdf2ec92991       coredns-6f6b679f8f-8gjc6
	f41de974958b8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             10 minutes ago      Running             kube-proxy                 0                   37e7fe6fa66b5       kube-proxy-8qj9l
	7c59931085105       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             10 minutes ago      Running             kube-scheduler             0                   f5c9bab6fb293       kube-scheduler-addons-990097
	e7f9f99f0e0ad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             10 minutes ago      Running             kube-apiserver             0                   fcc77a679af87       kube-apiserver-addons-990097
	b8d25fadc3e3b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             10 minutes ago      Running             kube-controller-manager    0                   2ff31a06164b2       kube-controller-manager-addons-990097
	f5afe4e2c7c30       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                       0                   3e4bbd88d6334       etcd-addons-990097
	
	
	==> coredns [04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b] <==
	[INFO] 127.0.0.1:43936 - 36274 "HINFO IN 1185575041321747915.1095525017323975341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010495017s
	[INFO] 10.244.0.7:36545 - 56598 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00046725s
	[INFO] 10.244.0.7:36545 - 34323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122837s
	[INFO] 10.244.0.7:40812 - 34220 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159278s
	[INFO] 10.244.0.7:40812 - 30894 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094014s
	[INFO] 10.244.0.7:51634 - 55543 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000178288s
	[INFO] 10.244.0.7:51634 - 16073 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087886s
	[INFO] 10.244.0.7:58682 - 5261 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220947s
	[INFO] 10.244.0.7:58682 - 20879 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015173s
	[INFO] 10.244.0.7:34574 - 59863 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142024s
	[INFO] 10.244.0.7:34574 - 27092 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153861s
	[INFO] 10.244.0.7:47702 - 54016 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074647s
	[INFO] 10.244.0.7:47702 - 51998 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067543s
	[INFO] 10.244.0.7:41963 - 59886 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068451s
	[INFO] 10.244.0.7:41963 - 56300 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027312s
	[INFO] 10.244.0.7:43940 - 2554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010857s
	[INFO] 10.244.0.7:43940 - 48379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072487s
	[INFO] 10.244.0.22:56224 - 47882 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420049s
	[INFO] 10.244.0.22:50407 - 64319 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234351s
	[INFO] 10.244.0.22:57980 - 2289 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127832s
	[INFO] 10.244.0.22:51961 - 33598 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075597s
	[INFO] 10.244.0.22:37745 - 53825 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120959s
	[INFO] 10.244.0.22:46423 - 60876 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059568s
	[INFO] 10.244.0.22:56705 - 36016 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000732573s
	[INFO] 10.244.0.22:55859 - 40874 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001065258s
	
	
	==> describe nodes <==
	Name:               addons-990097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-990097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-990097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-990097
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-990097
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:03:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:03:18 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:03:18 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:03:18 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:03:18 +0000   Wed, 28 Aug 2024 16:52:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-990097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fc997bea7fd463bb1b99884632d7f13
	  System UUID:                6fc997be-a7fd-463b-b1b9-9884632d7f13
	  Boot ID:                    c2f58d05-673b-4f75-ad50-a0fe6c092504
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-znl98     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  gcp-auth                    gcp-auth-89d5ffd79-hhsh7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-q4hvm    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-8gjc6                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-990097                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-990097                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-990097       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8qj9l                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-990097                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-s6z6n             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 nvidia-device-plugin-daemonset-j24tf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 tiller-deploy-b48cc5f79-wr7ks               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-fs8wf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-990097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-990097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-990097 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-990097 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-990097 event: Registered Node addons-990097 in Controller
	
	
	==> dmesg <==
	[  +5.451135] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.386769] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +4.834992] kauditd_printk_skb: 110 callbacks suppressed
	[Aug28 16:53] kauditd_printk_skb: 190 callbacks suppressed
	[ +11.495711] kauditd_printk_skb: 39 callbacks suppressed
	[ +27.839945] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.867762] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.427541] kauditd_printk_skb: 12 callbacks suppressed
	[Aug28 16:54] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.057528] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.649408] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.118173] kauditd_printk_skb: 45 callbacks suppressed
	[ +23.135978] kauditd_printk_skb: 6 callbacks suppressed
	[Aug28 16:55] kauditd_printk_skb: 30 callbacks suppressed
	[Aug28 16:56] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 16:59] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 17:02] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.029063] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.016574] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.965420] kauditd_printk_skb: 11 callbacks suppressed
	[Aug28 17:03] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.023131] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.275073] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.034230] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.067261] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca] <==
	{"level":"warn","ts":"2024-08-28T16:53:22.460877Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"316.039074ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:53:22.460910Z","caller":"traceutil/trace.go:171","msg":"trace[182306142] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:898; }","duration":"316.075516ms","start":"2024-08-28T16:53:22.144828Z","end":"2024-08-28T16:53:22.460903Z","steps":["trace[182306142] 'range keys from in-memory index tree'  (duration: 315.994788ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:53:22.460930Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T16:53:22.144794Z","time spent":"316.129801ms","remote":"127.0.0.1:52266","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-28T16:53:22.461042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.018812ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:53:22.461070Z","caller":"traceutil/trace.go:171","msg":"trace[1208754861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:898; }","duration":"314.048115ms","start":"2024-08-28T16:53:22.147017Z","end":"2024-08-28T16:53:22.461065Z","steps":["trace[1208754861] 'range keys from in-memory index tree'  (duration: 313.969862ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:53:22.461087Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T16:53:22.146944Z","time spent":"314.138242ms","remote":"127.0.0.1:52266","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-28T16:53:34.593913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.743715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97\" ","response":"range_response_count:1 size:781"}
	{"level":"info","ts":"2024-08-28T16:53:34.593958Z","caller":"traceutil/trace.go:171","msg":"trace[1783725651] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97; range_end:; response_count:1; response_revision:925; }","duration":"207.796464ms","start":"2024-08-28T16:53:34.386149Z","end":"2024-08-28T16:53:34.593946Z","steps":["trace[1783725651] 'range keys from in-memory index tree'  (duration: 207.618074ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:42.558909Z","caller":"traceutil/trace.go:171","msg":"trace[1823174305] transaction","detail":"{read_only:false; response_revision:947; number_of_response:1; }","duration":"287.407068ms","start":"2024-08-28T16:53:42.271483Z","end":"2024-08-28T16:53:42.558890Z","steps":["trace[1823174305] 'process raft request'  (duration: 287.285479ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:47.622998Z","caller":"traceutil/trace.go:171","msg":"trace[932674930] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"104.356711ms","start":"2024-08-28T16:53:47.518628Z","end":"2024-08-28T16:53:47.622985Z","steps":["trace[932674930] 'process raft request'  (duration: 104.239303ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.302592Z","caller":"traceutil/trace.go:171","msg":"trace[1524241447] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"230.314513ms","start":"2024-08-28T16:54:55.072243Z","end":"2024-08-28T16:54:55.302557Z","steps":["trace[1524241447] 'process raft request'  (duration: 229.745464ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.303259Z","caller":"traceutil/trace.go:171","msg":"trace[1962350705] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"198.610451ms","start":"2024-08-28T16:54:55.103505Z","end":"2024-08-28T16:54:55.302115Z","steps":["trace[1962350705] 'read index received'  (duration: 198.397171ms)","trace[1962350705] 'applied index is now lower than readState.Index'  (duration: 212.527µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T16:54:55.303540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.965293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-08-28T16:54:55.303613Z","caller":"traceutil/trace.go:171","msg":"trace[178375171] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1269; }","duration":"200.115796ms","start":"2024-08-28T16:54:55.103483Z","end":"2024-08-28T16:54:55.303599Z","steps":["trace[178375171] 'agreement among raft nodes before linearized reading'  (duration: 199.893413ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:02:42.414396Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-08-28T17:02:42.450275Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"35.104221ms","hash":1413996905,"current-db-size-bytes":6000640,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3461120,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-08-28T17:02:42.450436Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1413996905,"revision":1528,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T17:02:53.325278Z","caller":"traceutil/trace.go:171","msg":"trace[2108371229] linearizableReadLoop","detail":"{readStateIndex:2211; appliedIndex:2210; }","duration":"459.294326ms","start":"2024-08-28T17:02:52.865949Z","end":"2024-08-28T17:02:53.325243Z","steps":["trace[2108371229] 'read index received'  (duration: 459.150699ms)","trace[2108371229] 'applied index is now lower than readState.Index'  (duration: 142.943µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:02:53.325511Z","caller":"traceutil/trace.go:171","msg":"trace[424925906] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"525.181818ms","start":"2024-08-28T17:02:52.800315Z","end":"2024-08-28T17:02:53.325497Z","steps":["trace[424925906] 'process raft request'  (duration: 524.829974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.733213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-28T17:02:53.325825Z","caller":"traceutil/trace.go:171","msg":"trace[162657861] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:2063; }","duration":"368.829874ms","start":"2024-08-28T17:02:52.956983Z","end":"2024-08-28T17:02:53.325812Z","steps":["trace[162657861] 'agreement among raft nodes before linearized reading'  (duration: 368.661415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.956950Z","time spent":"368.907244ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-08-28T17:02:53.326000Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.0423ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:02:53.326031Z","caller":"traceutil/trace.go:171","msg":"trace[1263793325] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2063; }","duration":"460.081559ms","start":"2024-08-28T17:02:52.865944Z","end":"2024-08-28T17:02:53.326026Z","steps":["trace[1263793325] 'agreement among raft nodes before linearized reading'  (duration: 460.033068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.327962Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.800270Z","time spent":"525.3055ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2016 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	
	
	==> gcp-auth [c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4] <==
	2024/08/28 16:54:18 GCP Auth Webhook started!
	2024/08/28 16:54:23 Ready to marshal response ...
	2024/08/28 16:54:23 Ready to write response ...
	2024/08/28 16:54:23 Ready to marshal response ...
	2024/08/28 16:54:23 Ready to write response ...
	2024/08/28 16:54:23 Ready to marshal response ...
	2024/08/28 16:54:23 Ready to write response ...
	2024/08/28 17:02:37 Ready to marshal response ...
	2024/08/28 17:02:37 Ready to write response ...
	2024/08/28 17:02:46 Ready to marshal response ...
	2024/08/28 17:02:46 Ready to write response ...
	2024/08/28 17:02:50 Ready to marshal response ...
	2024/08/28 17:02:50 Ready to write response ...
	2024/08/28 17:03:06 Ready to marshal response ...
	2024/08/28 17:03:06 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:33 Ready to marshal response ...
	2024/08/28 17:03:33 Ready to write response ...
	
	
	==> kernel <==
	 17:03:39 up 11 min,  0 users,  load average: 1.54, 0.65, 0.45
	Linux addons-990097 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83] <==
	E0828 16:54:47.902087       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0828 16:54:47.903698       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	E0828 16:54:47.909556       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	I0828 16:54:47.980457       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0828 17:02:44.290693       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0828 17:02:45.318931       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0828 17:02:50.191896       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0828 17:02:50.441092       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.222.232"}
	I0828 17:03:01.316914       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:22.465921       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.465977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.486678       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.486850       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.594997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.595115       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.613013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.614969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.617986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.618315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:03:23.613355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:03:23.619379       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0828 17:03:23.735621       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880] <==
	I0828 17:03:16.855815       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-990097"
	I0828 17:03:18.798190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-990097"
	I0828 17:03:22.655170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="15.537µs"
	E0828 17:03:23.615266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0828 17:03:23.620738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0828 17:03:23.738035       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:24.960738       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:24.960907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:25.028777       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:25.028817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:25.313510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:25.313622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:27.041946       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:27.041997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:27.202008       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:27.202059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:27.389872       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:27.389922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:30.880057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:30.880167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:32.806691       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:32.806752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:03:33.018072       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:03:33.018194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:03:37.686251       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="5.559µs"
	
	
	==> kube-proxy [f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 16:52:52.089415       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 16:52:52.099940       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0828 16:52:52.099997       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:52.173377       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 16:52:52.173438       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 16:52:52.173468       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:52.175943       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:52.176378       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:52.176391       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:52.177695       1 config.go:197] "Starting service config controller"
	I0828 16:52:52.177716       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:52.177745       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:52.177750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:52.178237       1 config.go:326] "Starting node config controller"
	I0828 16:52:52.178244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:52.278337       1 shared_informer.go:320] Caches are synced for node config
	I0828 16:52:52.278370       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:52.278391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093] <==
	W0828 16:52:43.762343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:43.762378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:43.767505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 16:52:43.767603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.593944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 16:52:44.593990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.649260       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:52:44.649415       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0828 16:52:44.667387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 16:52:44.667479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.675396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:52:44.675487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.740397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 16:52:44.740445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.770930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 16:52:44.770991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.825118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 16:52:44.825170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.869231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.869366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.933958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 16:52:44.934034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.988755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.988802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0828 16:52:47.648648       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.183656    1192 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cef02da7b66ed11b622cd3690b07d9e4a7b9f616adafd2950402c28f3f000671"
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.307447    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkjhg\" (UniqueName: \"kubernetes.io/projected/ac643701-fec6-49dd-9d5a-55754534553a-kube-api-access-lkjhg\") pod \"ac643701-fec6-49dd-9d5a-55754534553a\" (UID: \"ac643701-fec6-49dd-9d5a-55754534553a\") "
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.307564    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac643701-fec6-49dd-9d5a-55754534553a-gcp-creds\") pod \"ac643701-fec6-49dd-9d5a-55754534553a\" (UID: \"ac643701-fec6-49dd-9d5a-55754534553a\") "
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.307988    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac643701-fec6-49dd-9d5a-55754534553a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ac643701-fec6-49dd-9d5a-55754534553a" (UID: "ac643701-fec6-49dd-9d5a-55754534553a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.317565    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac643701-fec6-49dd-9d5a-55754534553a-kube-api-access-lkjhg" (OuterVolumeSpecName: "kube-api-access-lkjhg") pod "ac643701-fec6-49dd-9d5a-55754534553a" (UID: "ac643701-fec6-49dd-9d5a-55754534553a"). InnerVolumeSpecName "kube-api-access-lkjhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.409002    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lkjhg\" (UniqueName: \"kubernetes.io/projected/ac643701-fec6-49dd-9d5a-55754534553a-kube-api-access-lkjhg\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:03:37 addons-990097 kubelet[1192]: I0828 17:03:37.409053    1192 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ac643701-fec6-49dd-9d5a-55754534553a-gcp-creds\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.115924    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqvvg\" (UniqueName: \"kubernetes.io/projected/1ab53ee3-0865-49b3-8fd0-7f176587e4d5-kube-api-access-mqvvg\") pod \"1ab53ee3-0865-49b3-8fd0-7f176587e4d5\" (UID: \"1ab53ee3-0865-49b3-8fd0-7f176587e4d5\") "
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.119441    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ab53ee3-0865-49b3-8fd0-7f176587e4d5-kube-api-access-mqvvg" (OuterVolumeSpecName: "kube-api-access-mqvvg") pod "1ab53ee3-0865-49b3-8fd0-7f176587e4d5" (UID: "1ab53ee3-0865-49b3-8fd0-7f176587e4d5"). InnerVolumeSpecName "kube-api-access-mqvvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.191885    1192 scope.go:117] "RemoveContainer" containerID="ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.216512    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4rf6\" (UniqueName: \"kubernetes.io/projected/28ff509c-2b4f-4dbc-ac62-07fa93fce1c0-kube-api-access-n4rf6\") pod \"28ff509c-2b4f-4dbc-ac62-07fa93fce1c0\" (UID: \"28ff509c-2b4f-4dbc-ac62-07fa93fce1c0\") "
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.216593    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mqvvg\" (UniqueName: \"kubernetes.io/projected/1ab53ee3-0865-49b3-8fd0-7f176587e4d5-kube-api-access-mqvvg\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.227337    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ff509c-2b4f-4dbc-ac62-07fa93fce1c0-kube-api-access-n4rf6" (OuterVolumeSpecName: "kube-api-access-n4rf6") pod "28ff509c-2b4f-4dbc-ac62-07fa93fce1c0" (UID: "28ff509c-2b4f-4dbc-ac62-07fa93fce1c0"). InnerVolumeSpecName "kube-api-access-n4rf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.297357    1192 scope.go:117] "RemoveContainer" containerID="ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: E0828 17:03:38.298476    1192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0\": container with ID starting with ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0 not found: ID does not exist" containerID="ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.298522    1192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0"} err="failed to get container status \"ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0\": rpc error: code = NotFound desc = could not find container \"ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0\": container with ID starting with ab61185fc300f9c98b4086ea7cf17b98e0617751556aa0cd52b53a93aff0e7a0 not found: ID does not exist"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.298544    1192 scope.go:117] "RemoveContainer" containerID="d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.317044    1192 scope.go:117] "RemoveContainer" containerID="d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: E0828 17:03:38.318220    1192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb\": container with ID starting with d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb not found: ID does not exist" containerID="d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.318252    1192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb"} err="failed to get container status \"d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb\": rpc error: code = NotFound desc = could not find container \"d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb\": container with ID starting with d2ba4738476529216550e3aae8bc4540c114d7c2191db4c8827beef1a52f90bb not found: ID does not exist"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.318429    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-n4rf6\" (UniqueName: \"kubernetes.io/projected/28ff509c-2b4f-4dbc-ac62-07fa93fce1c0-kube-api-access-n4rf6\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.408009    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ab53ee3-0865-49b3-8fd0-7f176587e4d5" path="/var/lib/kubelet/pods/1ab53ee3-0865-49b3-8fd0-7f176587e4d5/volumes"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.408558    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d39abde-ee29-4a16-9798-84ca035d198c" path="/var/lib/kubelet/pods/1d39abde-ee29-4a16-9798-84ca035d198c/volumes"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.408991    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ff509c-2b4f-4dbc-ac62-07fa93fce1c0" path="/var/lib/kubelet/pods/28ff509c-2b4f-4dbc-ac62-07fa93fce1c0/volumes"
	Aug 28 17:03:38 addons-990097 kubelet[1192]: I0828 17:03:38.409675    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac643701-fec6-49dd-9d5a-55754534553a" path="/var/lib/kubelet/pods/ac643701-fec6-49dd-9d5a-55754534553a/volumes"
	
	
	==> storage-provisioner [092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6] <==
	I0828 16:52:58.911009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:58.964276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:59.019593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:59.214120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:59.226396       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	I0828 16:52:59.227671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ef84ee-3904-40bf-b67a-f3ab38dd9ae4", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e became leader
	I0828 16:52:59.636127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-990097 -n addons-990097
helpers_test.go:261: (dbg) Run:  kubectl --context addons-990097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-dqzdf ingress-nginx-admission-patch-h8rvs
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-990097 describe pod busybox ingress-nginx-admission-create-dqzdf ingress-nginx-admission-patch-h8rvs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-990097 describe pod busybox ingress-nginx-admission-create-dqzdf ingress-nginx-admission-patch-h8rvs: exit status 1 (71.884534ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-990097/192.168.39.195
	Start Time:       Wed, 28 Aug 2024 16:54:23 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58r55 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-58r55:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-990097
	  Normal   Pulling    7m37s (x4 over 9m16s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m37s (x4 over 9m15s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m37s (x4 over 9m15s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dqzdf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-h8rvs" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-990097 describe pod busybox ingress-nginx-admission-create-dqzdf ingress-nginx-admission-patch-h8rvs: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.92s)

x
+
TestAddons/parallel/Ingress (156.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-990097 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-990097 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-990097 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [001cf7f5-0df7-4a5a-aad0-71b14bcde5db] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [001cf7f5-0df7-4a5a-aad0-71b14bcde5db] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004355161s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-990097 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.083452982s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-990097 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.195
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 addons disable ingress-dns --alsologtostderr -v=1: (1.103450785s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 addons disable ingress --alsologtostderr -v=1: (7.688841449s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-990097 -n addons-990097
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 logs -n 25: (1.214885912s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-382773                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-238617                                                                     | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-382773                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | binary-mirror-802579                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34799                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-802579                                                                     | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| addons  | disable dashboard -p                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-990097 --wait=true                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh curl -s                                                                   | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh cat                                                                       | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | /opt/local-path-provisioner/pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-990097 ip                                                                            | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | -p addons-990097                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | -p addons-990097                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-990097 ip                                                                            | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:52:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:52:03.553302   18249 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:52:03.553558   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.553567   18249 out.go:358] Setting ErrFile to fd 2...
	I0828 16:52:03.553572   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.554137   18249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 16:52:03.555206   18249 out.go:352] Setting JSON to false
	I0828 16:52:03.556015   18249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2070,"bootTime":1724861854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:52:03.556070   18249 start.go:139] virtualization: kvm guest
	I0828 16:52:03.557879   18249 out.go:177] * [addons-990097] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:52:03.559933   18249 notify.go:220] Checking for updates...
	I0828 16:52:03.559948   18249 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 16:52:03.561141   18249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:52:03.562248   18249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:03.563381   18249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:03.564522   18249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 16:52:03.565685   18249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 16:52:03.567058   18249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:52:03.598505   18249 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 16:52:03.599805   18249 start.go:297] selected driver: kvm2
	I0828 16:52:03.599821   18249 start.go:901] validating driver "kvm2" against <nil>
	I0828 16:52:03.599832   18249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 16:52:03.600482   18249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.600546   18249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 16:52:03.615718   18249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 16:52:03.615767   18249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:52:03.616004   18249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:52:03.616072   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:03.616089   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:03.616099   18249 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:52:03.616172   18249 start.go:340] cluster config:
	{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:03.616295   18249 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.618096   18249 out.go:177] * Starting "addons-990097" primary control-plane node in "addons-990097" cluster
	I0828 16:52:03.619317   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:03.619368   18249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 16:52:03.619389   18249 cache.go:56] Caching tarball of preloaded images
	I0828 16:52:03.619481   18249 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 16:52:03.619495   18249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 16:52:03.619843   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:03.619867   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json: {Name:mk1d9cf08f8bf0b3aa1979f7c4b7b4ba59401421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:03.620021   18249 start.go:360] acquireMachinesLock for addons-990097: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 16:52:03.620070   18249 start.go:364] duration metric: took 34.81µs to acquireMachinesLock for "addons-990097"
	I0828 16:52:03.620088   18249 start.go:93] Provisioning new machine with config: &{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:03.620159   18249 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 16:52:03.622720   18249 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0828 16:52:03.622873   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:03.622908   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:03.637096   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0828 16:52:03.637576   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:03.638135   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:03.638159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:03.638519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:03.638728   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:03.638904   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:03.639054   18249 start.go:159] libmachine.API.Create for "addons-990097" (driver="kvm2")
	I0828 16:52:03.639083   18249 client.go:168] LocalClient.Create starting
	I0828 16:52:03.639131   18249 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 16:52:03.706793   18249 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 16:52:04.040558   18249 main.go:141] libmachine: Running pre-create checks...
	I0828 16:52:04.040580   18249 main.go:141] libmachine: (addons-990097) Calling .PreCreateCheck
	I0828 16:52:04.041083   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:04.041464   18249 main.go:141] libmachine: Creating machine...
	I0828 16:52:04.041477   18249 main.go:141] libmachine: (addons-990097) Calling .Create
	I0828 16:52:04.041686   18249 main.go:141] libmachine: (addons-990097) Creating KVM machine...
	I0828 16:52:04.042940   18249 main.go:141] libmachine: (addons-990097) DBG | found existing default KVM network
	I0828 16:52:04.043689   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.043534   18271 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0828 16:52:04.043707   18249 main.go:141] libmachine: (addons-990097) DBG | created network xml: 
	I0828 16:52:04.043719   18249 main.go:141] libmachine: (addons-990097) DBG | <network>
	I0828 16:52:04.043734   18249 main.go:141] libmachine: (addons-990097) DBG |   <name>mk-addons-990097</name>
	I0828 16:52:04.043744   18249 main.go:141] libmachine: (addons-990097) DBG |   <dns enable='no'/>
	I0828 16:52:04.043754   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043761   18249 main.go:141] libmachine: (addons-990097) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0828 16:52:04.043768   18249 main.go:141] libmachine: (addons-990097) DBG |     <dhcp>
	I0828 16:52:04.043774   18249 main.go:141] libmachine: (addons-990097) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0828 16:52:04.043781   18249 main.go:141] libmachine: (addons-990097) DBG |     </dhcp>
	I0828 16:52:04.043787   18249 main.go:141] libmachine: (addons-990097) DBG |   </ip>
	I0828 16:52:04.043797   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043808   18249 main.go:141] libmachine: (addons-990097) DBG | </network>
	I0828 16:52:04.043821   18249 main.go:141] libmachine: (addons-990097) DBG | 
	I0828 16:52:04.048764   18249 main.go:141] libmachine: (addons-990097) DBG | trying to create private KVM network mk-addons-990097 192.168.39.0/24...
	I0828 16:52:04.113488   18249 main.go:141] libmachine: (addons-990097) DBG | private KVM network mk-addons-990097 192.168.39.0/24 created
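For orientation outside the log: the lines above show minikube generating libvirt network XML and creating the private "mk-addons-990097" network. A minimal sketch of doing the same step by hand with the libvirt Go bindings follows; the import path libvirt.org/go/libvirt and the inlined XML are assumptions for illustration, not minikube's own code.

	// Illustrative sketch only: define and start a private libvirt network
	// like the one created in the log above. Assumes libvirt.org/go/libvirt.
	package main
	
	import (
		"log"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	const networkXML = `<network>
	  <name>mk-addons-990097</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()
	
		// Define the network from XML, then start it; this mirrors
		// `virsh net-define` followed by `virsh net-start`.
		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
	
		if err := net.Create(); err != nil {
			log.Fatalf("start network: %v", err)
		}
	}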
	I0828 16:52:04.113513   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.113440   18271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.113526   18249 main.go:141] libmachine: (addons-990097) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.113543   18249 main.go:141] libmachine: (addons-990097) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 16:52:04.113618   18249 main.go:141] libmachine: (addons-990097) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 16:52:04.371432   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.371337   18271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa...
	I0828 16:52:04.533443   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533306   18271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk...
	I0828 16:52:04.533482   18249 main.go:141] libmachine: (addons-990097) DBG | Writing magic tar header
	I0828 16:52:04.533524   18249 main.go:141] libmachine: (addons-990097) DBG | Writing SSH key tar header
	I0828 16:52:04.533569   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533458   18271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.533617   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097
	I0828 16:52:04.533642   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 (perms=drwx------)
	I0828 16:52:04.533657   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 16:52:04.533672   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.533690   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 16:52:04.533705   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 16:52:04.533713   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins
	I0828 16:52:04.533724   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 16:52:04.533737   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home
	I0828 16:52:04.533748   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 16:52:04.533762   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 16:52:04.533774   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 16:52:04.533786   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 16:52:04.533798   18249 main.go:141] libmachine: (addons-990097) DBG | Skipping /home - not owner
	I0828 16:52:04.533808   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:04.535453   18249 main.go:141] libmachine: (addons-990097) define libvirt domain using xml: 
	I0828 16:52:04.535472   18249 main.go:141] libmachine: (addons-990097) <domain type='kvm'>
	I0828 16:52:04.535482   18249 main.go:141] libmachine: (addons-990097)   <name>addons-990097</name>
	I0828 16:52:04.535497   18249 main.go:141] libmachine: (addons-990097)   <memory unit='MiB'>4000</memory>
	I0828 16:52:04.535505   18249 main.go:141] libmachine: (addons-990097)   <vcpu>2</vcpu>
	I0828 16:52:04.535513   18249 main.go:141] libmachine: (addons-990097)   <features>
	I0828 16:52:04.535525   18249 main.go:141] libmachine: (addons-990097)     <acpi/>
	I0828 16:52:04.535533   18249 main.go:141] libmachine: (addons-990097)     <apic/>
	I0828 16:52:04.535543   18249 main.go:141] libmachine: (addons-990097)     <pae/>
	I0828 16:52:04.535552   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.535560   18249 main.go:141] libmachine: (addons-990097)   </features>
	I0828 16:52:04.535573   18249 main.go:141] libmachine: (addons-990097)   <cpu mode='host-passthrough'>
	I0828 16:52:04.535578   18249 main.go:141] libmachine: (addons-990097)   
	I0828 16:52:04.535587   18249 main.go:141] libmachine: (addons-990097)   </cpu>
	I0828 16:52:04.535595   18249 main.go:141] libmachine: (addons-990097)   <os>
	I0828 16:52:04.535599   18249 main.go:141] libmachine: (addons-990097)     <type>hvm</type>
	I0828 16:52:04.535605   18249 main.go:141] libmachine: (addons-990097)     <boot dev='cdrom'/>
	I0828 16:52:04.535610   18249 main.go:141] libmachine: (addons-990097)     <boot dev='hd'/>
	I0828 16:52:04.535620   18249 main.go:141] libmachine: (addons-990097)     <bootmenu enable='no'/>
	I0828 16:52:04.535627   18249 main.go:141] libmachine: (addons-990097)   </os>
	I0828 16:52:04.535632   18249 main.go:141] libmachine: (addons-990097)   <devices>
	I0828 16:52:04.535640   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='cdrom'>
	I0828 16:52:04.535673   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/boot2docker.iso'/>
	I0828 16:52:04.535696   18249 main.go:141] libmachine: (addons-990097)       <target dev='hdc' bus='scsi'/>
	I0828 16:52:04.535707   18249 main.go:141] libmachine: (addons-990097)       <readonly/>
	I0828 16:52:04.535719   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535743   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='disk'>
	I0828 16:52:04.535766   18249 main.go:141] libmachine: (addons-990097)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 16:52:04.535787   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk'/>
	I0828 16:52:04.535800   18249 main.go:141] libmachine: (addons-990097)       <target dev='hda' bus='virtio'/>
	I0828 16:52:04.535809   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535822   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535834   18249 main.go:141] libmachine: (addons-990097)       <source network='mk-addons-990097'/>
	I0828 16:52:04.535847   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535857   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535873   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535886   18249 main.go:141] libmachine: (addons-990097)       <source network='default'/>
	I0828 16:52:04.535900   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535911   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535920   18249 main.go:141] libmachine: (addons-990097)     <serial type='pty'>
	I0828 16:52:04.535932   18249 main.go:141] libmachine: (addons-990097)       <target port='0'/>
	I0828 16:52:04.535942   18249 main.go:141] libmachine: (addons-990097)     </serial>
	I0828 16:52:04.535953   18249 main.go:141] libmachine: (addons-990097)     <console type='pty'>
	I0828 16:52:04.535965   18249 main.go:141] libmachine: (addons-990097)       <target type='serial' port='0'/>
	I0828 16:52:04.535984   18249 main.go:141] libmachine: (addons-990097)     </console>
	I0828 16:52:04.536000   18249 main.go:141] libmachine: (addons-990097)     <rng model='virtio'>
	I0828 16:52:04.536015   18249 main.go:141] libmachine: (addons-990097)       <backend model='random'>/dev/random</backend>
	I0828 16:52:04.536025   18249 main.go:141] libmachine: (addons-990097)     </rng>
	I0828 16:52:04.536033   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536041   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536047   18249 main.go:141] libmachine: (addons-990097)   </devices>
	I0828 16:52:04.536052   18249 main.go:141] libmachine: (addons-990097) </domain>
	I0828 16:52:04.536066   18249 main.go:141] libmachine: (addons-990097) 
	I0828 16:52:04.542000   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:8a:92:29 in network default
	I0828 16:52:04.542553   18249 main.go:141] libmachine: (addons-990097) Ensuring networks are active...
	I0828 16:52:04.542572   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:04.543276   18249 main.go:141] libmachine: (addons-990097) Ensuring network default is active
	I0828 16:52:04.543557   18249 main.go:141] libmachine: (addons-990097) Ensuring network mk-addons-990097 is active
	I0828 16:52:04.544054   18249 main.go:141] libmachine: (addons-990097) Getting domain xml...
	I0828 16:52:04.544739   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:05.926909   18249 main.go:141] libmachine: (addons-990097) Waiting to get IP...
	I0828 16:52:05.927895   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:05.928293   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:05.928329   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:05.928275   18271 retry.go:31] will retry after 307.43588ms: waiting for machine to come up
	I0828 16:52:06.237778   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.238168   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.238197   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.238118   18271 retry.go:31] will retry after 239.740862ms: waiting for machine to come up
	I0828 16:52:06.479526   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.479888   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.479911   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.479872   18271 retry.go:31] will retry after 313.269043ms: waiting for machine to come up
	I0828 16:52:06.794296   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.794785   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.794809   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.794738   18271 retry.go:31] will retry after 569.173838ms: waiting for machine to come up
	I0828 16:52:07.365385   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.365805   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.365854   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.365801   18271 retry.go:31] will retry after 528.42487ms: waiting for machine to come up
	I0828 16:52:07.896190   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.896616   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.896641   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.896567   18271 retry.go:31] will retry after 860.364887ms: waiting for machine to come up
	I0828 16:52:08.758007   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:08.758436   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:08.758461   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:08.758398   18271 retry.go:31] will retry after 735.816889ms: waiting for machine to come up
	I0828 16:52:09.496298   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:09.496737   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:09.496767   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:09.496707   18271 retry.go:31] will retry after 1.098370398s: waiting for machine to come up
	I0828 16:52:10.596985   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:10.597408   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:10.597437   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:10.597359   18271 retry.go:31] will retry after 1.834335212s: waiting for machine to come up
	I0828 16:52:12.434290   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:12.434611   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:12.434633   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:12.434571   18271 retry.go:31] will retry after 2.041065784s: waiting for machine to come up
	I0828 16:52:14.477426   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:14.477916   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:14.477948   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:14.477861   18271 retry.go:31] will retry after 1.984370117s: waiting for machine to come up
	I0828 16:52:16.464891   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:16.465274   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:16.465295   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:16.465230   18271 retry.go:31] will retry after 3.029154804s: waiting for machine to come up
	I0828 16:52:19.496261   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:19.496603   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:19.496625   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:19.496589   18271 retry.go:31] will retry after 3.151315591s: waiting for machine to come up
	I0828 16:52:22.651764   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:22.652112   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:22.652134   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:22.652073   18271 retry.go:31] will retry after 4.012346275s: waiting for machine to come up
	I0828 16:52:26.667962   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668404   18249 main.go:141] libmachine: (addons-990097) Found IP for machine: 192.168.39.195
	I0828 16:52:26.668422   18249 main.go:141] libmachine: (addons-990097) Reserving static IP address...
	I0828 16:52:26.668433   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has current primary IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668824   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "addons-990097", mac: "52:54:00:36:9e:33", ip: "192.168.39.195"} in network mk-addons-990097
	I0828 16:52:26.740976   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:26.741009   18249 main.go:141] libmachine: (addons-990097) Reserved static IP address: 192.168.39.195
	I0828 16:52:26.741023   18249 main.go:141] libmachine: (addons-990097) Waiting for SSH to be available...
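The "will retry after …: waiting for machine to come up" lines above are a plain poll-with-growing-backoff loop. As a hedged illustration (a generic version of that pattern, not minikube's retry package), such a loop in Go might look like this:

	// Illustrative sketch only: poll check() with growing, jittered backoff
	// until it succeeds or the deadline passes.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, sleep)
			time.Sleep(sleep)
			backoff *= 2 // wait a little longer after each failure
		}
	}
	
	func main() {
		// Toy check that fails a few times before succeeding.
		tries := 0
		err := waitFor(func() error {
			tries++
			if tries < 4 {
				return errors.New("machine has no IP yet")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}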
	I0828 16:52:26.743441   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.743738   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097
	I0828 16:52:26.743775   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find defined IP address of network mk-addons-990097 interface with MAC address 52:54:00:36:9e:33
	I0828 16:52:26.743951   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:26.743968   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:26.743999   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:26.744010   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:26.744026   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:26.754106   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: exit status 255: 
	I0828 16:52:26.754130   18249 main.go:141] libmachine: (addons-990097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 16:52:26.754137   18249 main.go:141] libmachine: (addons-990097) DBG | command : exit 0
	I0828 16:52:26.754143   18249 main.go:141] libmachine: (addons-990097) DBG | err     : exit status 255
	I0828 16:52:26.754151   18249 main.go:141] libmachine: (addons-990097) DBG | output  : 
	I0828 16:52:29.754760   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:29.757068   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757372   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.757400   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757503   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:29.757540   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:29.757562   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:29.757572   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:29.757582   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:29.877937   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: <nil>: 
	I0828 16:52:29.878225   18249 main.go:141] libmachine: (addons-990097) KVM machine creation complete!
	I0828 16:52:29.878543   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:29.879088   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879423   18249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 16:52:29.879439   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:29.880692   18249 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 16:52:29.880710   18249 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 16:52:29.880719   18249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 16:52:29.880732   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.882838   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883224   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.883254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883344   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.883507   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883658   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883823   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.884002   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.884174   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.884185   18249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 16:52:29.985509   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 16:52:29.985528   18249 main.go:141] libmachine: Detecting the provisioner...
	I0828 16:52:29.985535   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.988176   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.988544   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988718   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.988926   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989088   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989208   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.989336   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.989559   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.989571   18249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 16:52:30.090732   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 16:52:30.090818   18249 main.go:141] libmachine: found compatible host: buildroot
	I0828 16:52:30.090830   18249 main.go:141] libmachine: Provisioning with buildroot...
	I0828 16:52:30.090838   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091074   18249 buildroot.go:166] provisioning hostname "addons-990097"
	I0828 16:52:30.091095   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091265   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.094119   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094571   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.094674   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094784   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.094970   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095160   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095304   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.095507   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.095700   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.095717   18249 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-990097 && echo "addons-990097" | sudo tee /etc/hostname
	I0828 16:52:30.212118   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-990097
	
	I0828 16:52:30.212145   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.214848   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215331   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.215363   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215707   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.215913   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216244   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.216447   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.216630   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.216653   18249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-990097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-990097/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-990097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 16:52:30.326941   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 16:52:30.326969   18249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 16:52:30.326993   18249 buildroot.go:174] setting up certificates
	I0828 16:52:30.327005   18249 provision.go:84] configureAuth start
	I0828 16:52:30.327014   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.327328   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.330236   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330668   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.330698   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330848   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.332951   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333214   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.333255   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333377   18249 provision.go:143] copyHostCerts
	I0828 16:52:30.333453   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 16:52:30.333574   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 16:52:30.333649   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 16:52:30.333709   18249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.addons-990097 san=[127.0.0.1 192.168.39.195 addons-990097 localhost minikube]
	I0828 16:52:30.457282   18249 provision.go:177] copyRemoteCerts
	I0828 16:52:30.457342   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 16:52:30.457365   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.460211   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460550   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.460584   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460756   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.460951   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.461115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.461336   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.544126   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 16:52:30.567154   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 16:52:30.592366   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 16:52:30.617237   18249 provision.go:87] duration metric: took 290.219862ms to configureAuth
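The configureAuth step above generates a CA-signed server certificate with DNS and IP SANs and copies it to the guest. A rough, self-contained sketch of issuing such a certificate with Go's crypto/x509 follows; the common names and SAN values are copied from the log purely for illustration, and this is not minikube's provisioning code.

	// Illustrative sketch only: create a throwaway CA, then issue a server
	// certificate with DNS and IP SANs signed by that CA.
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}
	
		// Server certificate with the SANs seen in the log above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "addons-990097"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-990097", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.195")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}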
	I0828 16:52:30.617267   18249 buildroot.go:189] setting minikube options for container-runtime
	I0828 16:52:30.617448   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:30.617548   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.619914   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620221   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.620254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620425   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.620640   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620783   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620914   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.621107   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.621256   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.621270   18249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 16:52:30.848003   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 16:52:30.848031   18249 main.go:141] libmachine: Checking connection to Docker...
	I0828 16:52:30.848042   18249 main.go:141] libmachine: (addons-990097) Calling .GetURL
	I0828 16:52:30.849229   18249 main.go:141] libmachine: (addons-990097) DBG | Using libvirt version 6000000
	I0828 16:52:30.851198   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.851525   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851678   18249 main.go:141] libmachine: Docker is up and running!
	I0828 16:52:30.851690   18249 main.go:141] libmachine: Reticulating splines...
	I0828 16:52:30.851696   18249 client.go:171] duration metric: took 27.21260345s to LocalClient.Create
	I0828 16:52:30.851716   18249 start.go:167] duration metric: took 27.212664809s to libmachine.API.Create "addons-990097"
	I0828 16:52:30.851725   18249 start.go:293] postStartSetup for "addons-990097" (driver="kvm2")
	I0828 16:52:30.851734   18249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 16:52:30.851750   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:30.851973   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 16:52:30.851995   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.853964   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854285   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.854301   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854478   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.854647   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.854805   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.854935   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.935753   18249 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 16:52:30.939610   18249 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 16:52:30.939637   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 16:52:30.939732   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 16:52:30.939770   18249 start.go:296] duration metric: took 88.03849ms for postStartSetup
	I0828 16:52:30.939814   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:30.940381   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.942790   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943103   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.943132   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943312   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:30.943514   18249 start.go:128] duration metric: took 27.323344868s to createHost
	I0828 16:52:30.943546   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.945603   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.945953   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.945978   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.946156   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.946323   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946607   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946786   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.946957   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.947128   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.947143   18249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 16:52:31.050660   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724863951.031106642
	
	I0828 16:52:31.050686   18249 fix.go:216] guest clock: 1724863951.031106642
	I0828 16:52:31.050696   18249 fix.go:229] Guest: 2024-08-28 16:52:31.031106642 +0000 UTC Remote: 2024-08-28 16:52:30.943527716 +0000 UTC m=+27.423947828 (delta=87.578926ms)
	I0828 16:52:31.050749   18249 fix.go:200] guest clock delta is within tolerance: 87.578926ms
	I0828 16:52:31.050759   18249 start.go:83] releasing machines lock for "addons-990097", held for 27.430678011s
	I0828 16:52:31.050790   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.051040   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:31.053422   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053797   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.053831   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053954   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054408   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054525   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054615   18249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 16:52:31.054667   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.054710   18249 ssh_runner.go:195] Run: cat /version.json
	I0828 16:52:31.054729   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.057139   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057472   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057561   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057604   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057752   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.057882   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057908   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057911   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058061   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.058069   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058230   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058334   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058301   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.058460   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.130356   18249 ssh_runner.go:195] Run: systemctl --version
	I0828 16:52:31.176423   18249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 16:52:31.331223   18249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 16:52:31.337047   18249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 16:52:31.337126   18249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 16:52:31.352067   18249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 16:52:31.352090   18249 start.go:495] detecting cgroup driver to use...
	I0828 16:52:31.352154   18249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 16:52:31.366292   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 16:52:31.378875   18249 docker.go:217] disabling cri-docker service (if available) ...
	I0828 16:52:31.378945   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 16:52:31.391391   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 16:52:31.403829   18249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 16:52:31.515593   18249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 16:52:31.658525   18249 docker.go:233] disabling docker service ...
	I0828 16:52:31.658598   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 16:52:31.672788   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 16:52:31.684923   18249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 16:52:31.832671   18249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 16:52:31.955950   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 16:52:31.968509   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:52:31.985170   18249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 16:52:31.985222   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:31.994290   18249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 16:52:31.994356   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.003644   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.012976   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.022206   18249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 16:52:32.031981   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.041468   18249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.056996   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.066128   18249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 16:52:32.074610   18249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 16:52:32.074673   18249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 16:52:32.086779   18249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 16:52:32.095844   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:32.217079   18249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 16:52:32.305084   18249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 16:52:32.305166   18249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 16:52:32.309450   18249 start.go:563] Will wait 60s for crictl version
	I0828 16:52:32.309525   18249 ssh_runner.go:195] Run: which crictl
	I0828 16:52:32.312948   18249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 16:52:32.349653   18249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 16:52:32.349768   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.374953   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.403065   18249 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 16:52:32.404404   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:32.406839   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407142   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:32.407172   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407345   18249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 16:52:32.411258   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:32.422553   18249 kubeadm.go:883] updating cluster {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 16:52:32.422662   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:32.422725   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:32.452295   18249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 16:52:32.452389   18249 ssh_runner.go:195] Run: which lz4
	I0828 16:52:32.455957   18249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 16:52:32.459683   18249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 16:52:32.459715   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 16:52:33.619457   18249 crio.go:462] duration metric: took 1.163529047s to copy over tarball
	I0828 16:52:33.619537   18249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 16:52:35.728451   18249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108883425s)
	I0828 16:52:35.728489   18249 crio.go:469] duration metric: took 2.108993771s to extract the tarball
	I0828 16:52:35.728498   18249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 16:52:35.764177   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:35.805986   18249 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 16:52:35.806013   18249 cache_images.go:84] Images are preloaded, skipping loading
	I0828 16:52:35.806024   18249 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.0 crio true true} ...
	I0828 16:52:35.806169   18249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 16:52:35.806256   18249 ssh_runner.go:195] Run: crio config
	I0828 16:52:35.847424   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:35.847444   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:35.847453   18249 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 16:52:35.847477   18249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-990097 NodeName:addons-990097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 16:52:35.847617   18249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-990097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 16:52:35.847688   18249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 16:52:35.857307   18249 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 16:52:35.857386   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 16:52:35.866414   18249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 16:52:35.882622   18249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 16:52:35.898146   18249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0828 16:52:35.913810   18249 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0828 16:52:35.917387   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:35.928840   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:36.068112   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:36.084575   18249 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097 for IP: 192.168.39.195
	I0828 16:52:36.084599   18249 certs.go:194] generating shared ca certs ...
	I0828 16:52:36.084619   18249 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.084764   18249 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 16:52:36.178723   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt ...
	I0828 16:52:36.178750   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt: {Name:mkca0e9fa435263e5e1973904de7411404a3b5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178894   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key ...
	I0828 16:52:36.178904   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key: {Name:mke8d9e9bf1fb5b7a824f6128a8a0000adba5a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178971   18249 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 16:52:36.394826   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt ...
	I0828 16:52:36.394851   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt: {Name:mk69004c7e13f3376a06f0abafef4bde08b0d3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395002   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key ...
	I0828 16:52:36.395013   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key: {Name:mk5411c4aa0dbd29b19b8133f87fa65318c7ad4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395070   18249 certs.go:256] generating profile certs ...
	I0828 16:52:36.395115   18249 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key
	I0828 16:52:36.395137   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt with IP's: []
	I0828 16:52:36.439668   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt ...
	I0828 16:52:36.439694   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: {Name:mk453035261c38191e0ffde93aa6fa8d406cfb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439845   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key ...
	I0828 16:52:36.439856   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key: {Name:mkb125df58df3f8011bf26153ac05fdbffab3c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439917   18249 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd
	I0828 16:52:36.439934   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0828 16:52:36.539648   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd ...
	I0828 16:52:36.539677   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd: {Name:mk71f54c0b4de61e9c2536a122a940b588dc9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539818   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd ...
	I0828 16:52:36.539830   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd: {Name:mk45632fbbb3bbcb64891cfc4bf3dbd6f6b7d794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539890   18249 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt
	I0828 16:52:36.539962   18249 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key
	I0828 16:52:36.540013   18249 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key
	I0828 16:52:36.540031   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt with IP's: []
	I0828 16:52:36.667048   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt ...
	I0828 16:52:36.667076   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt: {Name:mkd4b5d49bf60b646d45ef076f74b004c8164a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.667220   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key ...
	I0828 16:52:36.667230   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key: {Name:mk17bb6cc5d80faf4d912b3341e01d7aaac69711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.667389   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 16:52:36.667426   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 16:52:36.667452   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 16:52:36.667474   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 16:52:36.668075   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 16:52:36.690924   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 16:52:36.712111   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 16:52:36.733708   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 16:52:36.764815   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 16:52:36.792036   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 16:52:36.815658   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 16:52:36.836525   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 16:52:36.857449   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 16:52:36.878273   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 16:52:36.893346   18249 ssh_runner.go:195] Run: openssl version
	I0828 16:52:36.899004   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 16:52:36.909101   18249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913722   18249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913785   18249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.919726   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 16:52:36.930086   18249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 16:52:36.933924   18249 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 16:52:36.933973   18249 kubeadm.go:392] StartCluster: {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:36.934057   18249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 16:52:36.934128   18249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 16:52:36.968169   18249 cri.go:89] found id: ""
	I0828 16:52:36.968234   18249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 16:52:36.977317   18249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 16:52:36.985866   18249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 16:52:36.994431   18249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 16:52:36.994459   18249 kubeadm.go:157] found existing configuration files:
	
	I0828 16:52:36.994509   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 16:52:37.004030   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 16:52:37.004090   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 16:52:37.012639   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 16:52:37.020830   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 16:52:37.020889   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 16:52:37.029469   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.037402   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 16:52:37.037462   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.045618   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 16:52:37.053640   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 16:52:37.053694   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 16:52:37.061952   18249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 16:52:37.112124   18249 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 16:52:37.112242   18249 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 16:52:37.208201   18249 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 16:52:37.208348   18249 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 16:52:37.208461   18249 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 16:52:37.215232   18249 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 16:52:37.218733   18249 out.go:235]   - Generating certificates and keys ...
	I0828 16:52:37.218826   18249 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 16:52:37.219027   18249 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 16:52:37.494799   18249 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 16:52:37.692765   18249 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 16:52:37.856293   18249 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 16:52:38.009127   18249 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 16:52:38.187901   18249 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 16:52:38.188087   18249 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.477231   18249 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 16:52:38.477411   18249 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.539600   18249 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 16:52:39.008399   18249 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 16:52:39.328471   18249 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 16:52:39.328600   18249 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 16:52:39.560006   18249 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 16:52:39.701891   18249 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 16:52:39.854713   18249 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 16:52:39.961910   18249 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 16:52:40.053380   18249 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 16:52:40.053922   18249 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 16:52:40.056435   18249 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 16:52:40.058106   18249 out.go:235]   - Booting up control plane ...
	I0828 16:52:40.058200   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 16:52:40.058271   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 16:52:40.058614   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 16:52:40.072832   18249 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 16:52:40.080336   18249 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 16:52:40.080381   18249 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 16:52:40.199027   18249 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 16:52:40.199152   18249 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 16:52:40.701214   18249 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.364407ms
	I0828 16:52:40.701332   18249 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 16:52:45.701403   18249 kubeadm.go:310] [api-check] The API server is healthy after 5.001374073s
	I0828 16:52:45.711899   18249 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 16:52:45.729058   18249 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 16:52:45.759777   18249 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 16:52:45.759972   18249 kubeadm.go:310] [mark-control-plane] Marking the node addons-990097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 16:52:45.773435   18249 kubeadm.go:310] [bootstrap-token] Using token: m82lde.zyra1pfrkjoxeehr
	I0828 16:52:45.775077   18249 out.go:235]   - Configuring RBAC rules ...
	I0828 16:52:45.775231   18249 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 16:52:45.781955   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 16:52:45.791540   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 16:52:45.798883   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 16:52:45.803511   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 16:52:45.808700   18249 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 16:52:46.106541   18249 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 16:52:46.534310   18249 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 16:52:47.106029   18249 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 16:52:47.107541   18249 kubeadm.go:310] 
	I0828 16:52:47.107598   18249 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 16:52:47.107633   18249 kubeadm.go:310] 
	I0828 16:52:47.107764   18249 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 16:52:47.107778   18249 kubeadm.go:310] 
	I0828 16:52:47.107809   18249 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 16:52:47.107871   18249 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 16:52:47.107961   18249 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 16:52:47.107982   18249 kubeadm.go:310] 
	I0828 16:52:47.108056   18249 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 16:52:47.108065   18249 kubeadm.go:310] 
	I0828 16:52:47.108133   18249 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 16:52:47.108140   18249 kubeadm.go:310] 
	I0828 16:52:47.108179   18249 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 16:52:47.108239   18249 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 16:52:47.108335   18249 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 16:52:47.108350   18249 kubeadm.go:310] 
	I0828 16:52:47.108499   18249 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 16:52:47.108627   18249 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 16:52:47.108638   18249 kubeadm.go:310] 
	I0828 16:52:47.108765   18249 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.108914   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 16:52:47.108948   18249 kubeadm.go:310] 	--control-plane 
	I0828 16:52:47.108962   18249 kubeadm.go:310] 
	I0828 16:52:47.109095   18249 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 16:52:47.109106   18249 kubeadm.go:310] 
	I0828 16:52:47.109197   18249 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.109291   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 16:52:47.110506   18249 kubeadm.go:310] W0828 16:52:37.095179     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.110904   18249 kubeadm.go:310] W0828 16:52:37.096135     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.111022   18249 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 16:52:47.111047   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:47.111061   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:47.113714   18249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 16:52:47.114865   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 16:52:47.125045   18249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 16:52:47.141868   18249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 16:52:47.141994   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.142013   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-990097 minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-990097 minikube.k8s.io/primary=true
	I0828 16:52:47.167583   18249 ops.go:34] apiserver oom_adj: -16
	I0828 16:52:47.253359   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.754084   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.254277   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.754104   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.254023   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.753456   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.254174   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.753691   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.254102   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.754161   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.862424   18249 kubeadm.go:1113] duration metric: took 4.720462069s to wait for elevateKubeSystemPrivileges
	I0828 16:52:51.862469   18249 kubeadm.go:394] duration metric: took 14.928497866s to StartCluster
	I0828 16:52:51.862492   18249 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.862622   18249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:51.863098   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.863295   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 16:52:51.863324   18249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:51.863367   18249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0828 16:52:51.863461   18249 addons.go:69] Setting default-storageclass=true in profile "addons-990097"
	I0828 16:52:51.863473   18249 addons.go:69] Setting registry=true in profile "addons-990097"
	I0828 16:52:51.863476   18249 addons.go:69] Setting metrics-server=true in profile "addons-990097"
	I0828 16:52:51.863499   18249 addons.go:234] Setting addon registry=true in "addons-990097"
	I0828 16:52:51.863492   18249 addons.go:69] Setting cloud-spanner=true in profile "addons-990097"
	I0828 16:52:51.863506   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-990097"
	I0828 16:52:51.863519   18249 addons.go:234] Setting addon metrics-server=true in "addons-990097"
	I0828 16:52:51.863529   18249 addons.go:234] Setting addon cloud-spanner=true in "addons-990097"
	I0828 16:52:51.863531   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863549   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863561   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863562   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.863607   18249 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-990097"
	I0828 16:52:51.863654   18249 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:52:51.863678   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863908   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863921   18249 addons.go:69] Setting ingress=true in profile "addons-990097"
	I0828 16:52:51.863926   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863933   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863938   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863948   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863953   18249 addons.go:234] Setting addon ingress=true in "addons-990097"
	I0828 16:52:51.863964   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863982   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863460   18249 addons.go:69] Setting yakd=true in profile "addons-990097"
	I0828 16:52:51.864018   18249 addons.go:69] Setting ingress-dns=true in profile "addons-990097"
	I0828 16:52:51.864030   18249 addons.go:69] Setting storage-provisioner=true in profile "addons-990097"
	I0828 16:52:51.864038   18249 addons.go:234] Setting addon yakd=true in "addons-990097"
	I0828 16:52:51.864041   18249 addons.go:234] Setting addon ingress-dns=true in "addons-990097"
	I0828 16:52:51.864048   18249 addons.go:234] Setting addon storage-provisioner=true in "addons-990097"
	I0828 16:52:51.864050   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864057   18249 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-990097"
	I0828 16:52:51.864061   18249 addons.go:69] Setting gcp-auth=true in profile "addons-990097"
	I0828 16:52:51.864068   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864068   18249 addons.go:69] Setting helm-tiller=true in profile "addons-990097"
	I0828 16:52:51.864081   18249 mustload.go:65] Loading cluster: addons-990097
	I0828 16:52:51.864058   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864087   18249 addons.go:234] Setting addon helm-tiller=true in "addons-990097"
	I0828 16:52:51.864092   18249 addons.go:69] Setting volumesnapshots=true in profile "addons-990097"
	I0828 16:52:51.864087   18249 addons.go:69] Setting volcano=true in profile "addons-990097"
	I0828 16:52:51.864105   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864108   18249 addons.go:234] Setting addon volumesnapshots=true in "addons-990097"
	I0828 16:52:51.863468   18249 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-990097"
	I0828 16:52:51.864138   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864111   18249 addons.go:234] Setting addon volcano=true in "addons-990097"
	I0828 16:52:51.864171   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864297   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864336   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864434   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864142   18249 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-990097"
	I0828 16:52:51.864465   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864046   18249 addons.go:69] Setting inspektor-gadget=true in profile "addons-990097"
	I0828 16:52:51.864493   18249 addons.go:234] Setting addon inspektor-gadget=true in "addons-990097"
	I0828 16:52:51.864543   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864568   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864591   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864798   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864877   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864896   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864905   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864929   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864937   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864955   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864572   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864983   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864081   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-990097"
	I0828 16:52:51.865148   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865166   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865240   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864545   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.865295   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865352   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865431   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.865521   18249 out.go:177] * Verifying Kubernetes components...
	I0828 16:52:51.867093   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:51.885199   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0828 16:52:51.885477   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0828 16:52:51.885492   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0828 16:52:51.885750   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.885755   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0828 16:52:51.885989   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886219   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886558   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886581   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886580   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886688   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886708   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886724   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886737   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887264   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887324   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887350   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.887362   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887907   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.887933   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887944   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.887912   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887987   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.888013   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0828 16:52:51.889234   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0828 16:52:51.890397   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890420   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890438   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890452   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890533   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890558   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890684   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890713   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.891153   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.891189   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.892730   18249 addons.go:234] Setting addon default-storageclass=true in "addons-990097"
	I0828 16:52:51.892913   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.893285   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.893322   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.894924   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.894976   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.895458   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895475   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895521   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895542   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895836   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.895884   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.896367   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896400   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896408   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.896431   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.920845   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46817
	I0828 16:52:51.921517   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.922235   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.922257   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.922922   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.923553   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.923595   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.928048   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0828 16:52:51.928224   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0828 16:52:51.928543   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928629   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928995   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929011   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929139   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929150   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929913   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.930496   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.930519   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.930739   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0828 16:52:51.930776   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0828 16:52:51.931018   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.931228   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931311   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931596   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.931633   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.932148   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932168   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932316   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932335   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932583   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.932657   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.933177   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.933214   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.933573   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.934348   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0828 16:52:51.934983   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.935496   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.935514   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.935540   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0828 16:52:51.935941   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.936141   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.936211   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.936686   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.936702   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.937184   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.937607   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.937779   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.937810   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.938264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.939007   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.939053   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.940198   18249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 16:52:51.941257   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:51.941275   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 16:52:51.941294   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.945245   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.945869   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.945889   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.946114   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.946297   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.946469   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.947368   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.948243   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0828 16:52:51.948630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.949142   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.949159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.949494   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.949670   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.951300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.953224   18249 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 16:52:51.954643   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:51.954663   18249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 16:52:51.954691   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.958105   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.958534   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.958558   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.960564   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0828 16:52:51.960712   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0828 16:52:51.960811   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.961092   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.961160   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0828 16:52:51.961463   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.961645   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.962144   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962212   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962501   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962836   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962852   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.962967   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962980   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.963302   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963364   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963916   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.963951   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.964787   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.964813   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.966540   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.966566   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.966978   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.967204   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.969078   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.970741   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 16:52:51.971825   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 16:52:51.973044   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 16:52:51.973220   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0828 16:52:51.973630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.974106   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.974125   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.974525   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.974714   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.975169   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0828 16:52:51.975891   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.976592   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.976607   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.976669   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.976985   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.977259   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.977312   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I0828 16:52:51.977724   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.978015   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 16:52:51.978118   18249 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0828 16:52:51.978190   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.978212   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.978519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.978701   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.979345   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 16:52:51.979360   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0828 16:52:51.979379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.980554   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 16:52:51.980864   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.981133   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0828 16:52:51.981800   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.982285   18249 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 16:52:51.982336   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 16:52:51.982805   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.982823   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.983085   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.983524   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.983649   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.983860   18249 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:51.983880   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 16:52:51.983898   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.984188   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.984214   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.984253   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.984424   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 16:52:51.984488   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.985059   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 16:52:51.986056   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:51.986113   18249 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 16:52:51.986133   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.986876   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.986944   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0828 16:52:51.987233   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 16:52:51.987277   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.987408   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.988124   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.988172   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:51.988183   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 16:52:51.988201   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.988609   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.988624   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.989053   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.989096   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989270   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.989445   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0828 16:52:51.989794   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.989811   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989923   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.990447   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990496   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990539   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0828 16:52:51.990826   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.990852   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990950   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990969   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991161   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.991400   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.991419   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.991421   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991402   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991650   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991758   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.991824   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992071   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.992286   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992541   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992569   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.992585   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.992850   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.992917   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.992930   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992959   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.992978   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.992986   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.992997   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.993004   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.993157   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.993194   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.993202   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.993228   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	W0828 16:52:51.993270   18249 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0828 16:52:51.993367   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.993502   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.993600   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.994634   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0828 16:52:51.994968   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.995300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.995803   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.995829   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.996150   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.996660   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.996695   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.996700   18249 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 16:52:51.998323   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0828 16:52:51.998854   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.999173   18249 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:51.999191   18249 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 16:52:51.999209   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.999355   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.999375   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.999733   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.000029   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.000074   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0828 16:52:52.000535   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.000555   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.000620   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001158   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.001173   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.001242   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0828 16:52:52.001533   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.001840   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001919   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.002585   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.002779   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.003130   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.003158   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.003646   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.003664   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.003919   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.004173   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.004207   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004302   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.004721   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.004745   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004915   18249 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 16:52:52.005124   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.005449   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.005556   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0828 16:52:52.005574   18249 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 16:52:52.005964   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.006575   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.006739   18249 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.006749   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 16:52:52.006762   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.007011   18249 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-990097"
	I0828 16:52:52.007047   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:52.007210   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.007223   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.007395   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.007635   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.007947   18249 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 16:52:52.008055   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.008153   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.008529   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.009065   18249 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:52.009079   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 16:52:52.009091   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.010799   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011239   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.011257   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011423   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.011668   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.011806   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.011928   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.012452   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.013295   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013770   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.013865   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013823   18249 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 16:52:52.013979   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.014280   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.014407   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.014585   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.015267   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:52.015319   18249 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 16:52:52.015347   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.018874   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43185
	I0828 16:52:52.019066   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019382   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.019521   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.019539   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019711   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.019861   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.020082   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.020241   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.020251   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.020261   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.020835   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.021022   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.021132   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0828 16:52:52.021489   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.022124   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.022148   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.022508   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.022715   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.022934   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.024013   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 16:52:52.024558   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.026046   18249 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 16:52:52.026047   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027328   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027344   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.027383   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 16:52:52.027410   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.028651   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.028667   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 16:52:52.028681   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.031130   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031559   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.031573   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031751   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.031908   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.032036   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.032165   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.032716   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033159   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0828 16:52:52.033303   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.033338   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.033428   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0828 16:52:52.033563   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.033754   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.033781   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.033785   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.034222   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034240   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034224   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034253   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.034269   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034593   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034635   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034793   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.035047   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.035083   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.036108   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.036365   18249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.036381   18249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 16:52:52.036396   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.039229   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039626   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.039642   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039793   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.039933   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.040034   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.040110   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.050832   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.050853   18249 retry.go:31] will retry after 265.877478ms: ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.065458   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0828 16:52:52.065895   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.066365   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.066389   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.066695   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.066934   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.068686   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.070267   18249 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 16:52:52.071813   18249 out.go:177]   - Using image docker.io/busybox:stable
	I0828 16:52:52.072975   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.073002   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 16:52:52.073024   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.076493   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.076991   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.077021   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.077115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.077290   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.077439   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.077557   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.078345   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.078374   18249 retry.go:31] will retry after 279.535479ms: ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.457264   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:52.472106   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:52.472127   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 16:52:52.472898   18249 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:52.472911   18249 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 16:52:52.477184   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:52.477383   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 16:52:52.564015   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:52.564048   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 16:52:52.575756   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:52.575777   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 16:52:52.585811   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:52.590531   18249 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:52.590558   18249 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 16:52:52.594784   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.613114   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:52.613137   18249 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 16:52:52.618849   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.630514   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 16:52:52.630548   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0828 16:52:52.680492   18249 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.680511   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 16:52:52.683692   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.711921   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:52.711950   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 16:52:52.758918   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:52.758942   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 16:52:52.772563   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:52.772585   18249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 16:52:52.783118   18249 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:52.783140   18249 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 16:52:52.784569   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.809813   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:52.809848   18249 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 16:52:52.826609   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.836825   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:52.836855   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0828 16:52:52.857767   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:52.857793   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 16:52:52.867452   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.903663   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:52.903735   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 16:52:52.914976   18249 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:52.914995   18249 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 16:52:52.980163   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:52.980191   18249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 16:52:52.984211   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:52.984228   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 16:52:53.040803   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:53.040824   18249 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 16:52:53.043499   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:53.043517   18249 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 16:52:53.059538   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:53.066983   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:53.067015   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 16:52:53.136171   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:53.136204   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 16:52:53.144640   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:53.187366   18249 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.187394   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 16:52:53.212893   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.212913   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 16:52:53.235809   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:53.235832   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 16:52:53.288679   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:53.288698   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 16:52:53.385998   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.397529   18249 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:53.397559   18249 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 16:52:53.399651   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.466548   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:53.466578   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 16:52:53.581666   18249 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.581691   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 16:52:53.691064   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:53.691083   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 16:52:53.853240   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.941644   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:53.941669   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 16:52:54.272844   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.272880   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 16:52:54.495971   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.756169   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.298876818s)
	I0828 16:52:54.756225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756239   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.756244   18249 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.278834015s)
	I0828 16:52:54.756268   18249 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0828 16:52:54.756332   18249 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.279127216s)
	I0828 16:52:54.756551   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.756572   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.756589   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756597   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.757015   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:54.757050   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.757059   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.757383   18249 node_ready.go:35] waiting up to 6m0s for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786124   18249 node_ready.go:49] node "addons-990097" has status "Ready":"True"
	I0828 16:52:54.786149   18249 node_ready.go:38] duration metric: took 28.747442ms for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786161   18249 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:52:54.827906   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.293839   18249 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-990097" context rescaled to 1 replicas
	I0828 16:52:55.917518   18249 pod_ready.go:93] pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:55.917551   18249 pod_ready.go:82] duration metric: took 1.089601559s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.917564   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:57.075627   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.489775878s)
	I0828 16:52:57.075691   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.075706   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.075965   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.075988   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.075998   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.076007   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.077276   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:57.077308   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.077327   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.979995   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:59.035882   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 16:52:59.035917   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.039427   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.039927   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.039958   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.040104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.040296   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.040538   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.040737   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:59.280183   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 16:52:59.327255   18249 addons.go:234] Setting addon gcp-auth=true in "addons-990097"
	I0828 16:52:59.327310   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:59.327726   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.327759   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.342823   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0828 16:52:59.343340   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.343791   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.343813   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.344064   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.344682   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.344737   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.360102   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0828 16:52:59.360990   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.361500   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.361519   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.361841   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.362016   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:59.363643   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:59.363866   18249 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 16:52:59.363888   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.366987   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367482   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.367512   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367772   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.367974   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.368154   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.368303   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:53:00.143087   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.524208814s)
	I0828 16:53:00.143133   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143143   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143179   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.459455766s)
	I0828 16:53:00.143218   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358630374s)
	I0828 16:53:00.143225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143234   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143237   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143245   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143279   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316631996s)
	I0828 16:53:00.143308   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143320   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143325   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275851965s)
	I0828 16:53:00.143341   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143349   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143439   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143454   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143465   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143477   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143588   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143601   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143603   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143610   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143622   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143642   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143669   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143673   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143678   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143680   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143686   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143689   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143693   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143697   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143705   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143736   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143743   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143875   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143971   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143990   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144005   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144007   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.144055   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.144079   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144094   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144037   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144153   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144353   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144059   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.145188   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.145203   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146141   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.146157   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146170   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146183   18249 addons.go:475] Verifying addon registry=true in "addons-990097"
	I0828 16:53:00.146206   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.086636827s)
	I0828 16:53:00.146315   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.001646973s)
	I0828 16:53:00.146337   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146350   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146459   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.760428919s)
	W0828 16:53:00.146488   18249 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:53:00.146500   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146507   18249 retry.go:31] will retry after 285.495702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:53:00.146512   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146514   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.746824063s)
	I0828 16:53:00.146540   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146545   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146550   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146559   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146560   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146618   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146697   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.293422003s)
	I0828 16:53:00.146718   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146857   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147307   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147338   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147345   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147352   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147359   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147366   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.552552261s)
	I0828 16:53:00.147388   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147399   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147398   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147422   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147429   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147437   18249 addons.go:475] Verifying addon metrics-server=true in "addons-990097"
	I0828 16:53:00.147458   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147509   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147518   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147526   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147534   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147545   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147554   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147758   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147784   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147790   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148382   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148394   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148412   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148414   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148424   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148431   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148445   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148453   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148461   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148468   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148778   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148815   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148823   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149053   18249 out.go:177] * Verifying registry addon...
	I0828 16:53:00.149082   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.149108   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.149116   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149124   18249 addons.go:475] Verifying addon ingress=true in "addons-990097"
	I0828 16:53:00.149773   18249 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-990097 service yakd-dashboard -n yakd-dashboard
	
	I0828 16:53:00.151268   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 16:53:00.151296   18249 out.go:177] * Verifying ingress addon...
	I0828 16:53:00.153273   18249 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 16:53:00.166762   18249 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 16:53:00.166788   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.182165   18249 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 16:53:00.182192   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.189119   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.189137   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.189552   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.189574   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	W0828 16:53:00.189671   18249 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0828 16:53:00.192266   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.192288   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.192629   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.192650   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.192654   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.432806   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:53:00.441373   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:00.616225   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.120200896s)
	I0828 16:53:00.616277   18249 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.252381186s)
	I0828 16:53:00.616290   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616306   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616613   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616635   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616651   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616659   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616960   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616974   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616985   18249 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:53:00.618208   18249 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 16:53:00.618221   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:53:00.620074   18249 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 16:53:00.620941   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 16:53:00.621479   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:53:00.621497   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 16:53:00.649240   18249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 16:53:00.649265   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:00.666906   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.666973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.798819   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:53:00.798846   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 16:53:00.965848   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:00.965868   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 16:53:01.096603   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:01.146627   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.246922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.247289   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.625375   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.727635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.728621   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.939675   18249 pod_ready.go:98] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939706   18249 pod_ready.go:82] duration metric: took 6.022133006s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	E0828 16:53:01.939721   18249 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939735   18249 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947681   18249 pod_ready.go:93] pod "etcd-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.947709   18249 pod_ready.go:82] duration metric: took 7.961903ms for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947723   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965179   18249 pod_ready.go:93] pod "kube-apiserver-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.965209   18249 pod_ready.go:82] duration metric: took 17.478027ms for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965223   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975413   18249 pod_ready.go:93] pod "kube-controller-manager-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.975442   18249 pod_ready.go:82] duration metric: took 10.210377ms for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975456   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989070   18249 pod_ready.go:93] pod "kube-proxy-8qj9l" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.989092   18249 pod_ready.go:82] duration metric: took 13.627304ms for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989102   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.126567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.155944   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.158684   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:02.322404   18249 pod_ready.go:93] pod "kube-scheduler-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:02.322427   18249 pod_ready.go:82] duration metric: took 333.317872ms for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.322440   18249 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.474322   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.04146744s)
	I0828 16:53:02.474395   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474415   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.474701   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.474716   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.474743   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.474804   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474818   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.475006   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.475026   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585160   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.48851671s)
	I0828 16:53:02.585206   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585217   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585499   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585553   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585584   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585591   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.585596   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585845   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585864   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585870   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.587678   18249 addons.go:475] Verifying addon gcp-auth=true in "addons-990097"
	I0828 16:53:02.589137   18249 out.go:177] * Verifying gcp-auth addon...
	I0828 16:53:02.590957   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 16:53:02.611253   18249 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:53:02.611280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:02.625344   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.656451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.659296   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.096111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.127568   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.156535   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.158882   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.594789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.625961   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.655530   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.656632   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.100416   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.202367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.202567   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.202579   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.332466   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:04.594922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.625960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.654548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.657398   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.095212   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.127010   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.154957   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.157414   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.600067   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.627331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.655666   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.658371   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.095702   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.125685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.166060   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.196174   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.595324   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.625617   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.654792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.657272   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.827854   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:07.094934   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.126052   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:07.155943   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.157205   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.843759   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.843956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.844210   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.845956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.094558   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.126496   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.156387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.158864   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.594938   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.625675   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.654652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.658021   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.829775   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:09.095286   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.125697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.156180   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.157544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:09.593920   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.626336   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.655412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.657265   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.095098   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.126775   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.154380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.156565   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.595836   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.625685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.654838   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.657544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.093858   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.125963   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.155451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.157776   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.329080   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:11.594338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.625913   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.655531   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.657757   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.094680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.125074   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.156504   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.157527   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.594657   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.625349   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.654353   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.656983   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.094718   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.125151   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.154331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.156598   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.595126   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.626873   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.654512   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.656740   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.828160   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:14.094559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.126019   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.155228   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.158042   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:14.596006   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.626608   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.656951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.659254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.094812   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.125914   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.155459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.157532   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.595411   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.625118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.654905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.656932   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.833089   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:16.095434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.125283   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.155066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.156964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:16.594257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.625899   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.655321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.658052   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.097404   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.124748   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.155670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.158403   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.594954   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.625453   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.654592   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.656593   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.126118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.155697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.156856   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.328637   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:18.595104   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.625985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.655062   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.657082   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.094569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.125822   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.155202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.157964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.594797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.625854   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.655328   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.657943   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.095529   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.125903   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:20.155547   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.157641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.329359   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:20.855221   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.858381   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.859843   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.860540   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.094959   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.129150   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.161797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.162220   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:21.594694   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.625635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.655280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.657315   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.094660   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.125891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.473066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.473715   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.476586   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:22.595128   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.625652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.654993   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.093886   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.126139   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.156079   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.158250   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.594455   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.625689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.654673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.657362   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.095220   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.197203   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.197523   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.197678   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.602569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.654778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.656915   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.829081   18249 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:24.829106   18249 pod_ready.go:82] duration metric: took 22.50665926s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:24.829114   18249 pod_ready.go:39] duration metric: took 30.042940712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:53:24.829128   18249 api_server.go:52] waiting for apiserver process to appear ...
	I0828 16:53:24.829180   18249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:53:24.846345   18249 api_server.go:72] duration metric: took 32.982988344s to wait for apiserver process to appear ...
	I0828 16:53:24.846376   18249 api_server.go:88] waiting for apiserver healthz status ...
	I0828 16:53:24.846397   18249 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0828 16:53:24.852123   18249 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0828 16:53:24.853689   18249 api_server.go:141] control plane version: v1.31.0
	I0828 16:53:24.853713   18249 api_server.go:131] duration metric: took 7.33084ms to wait for apiserver health ...
	I0828 16:53:24.853721   18249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 16:53:24.862271   18249 system_pods.go:59] 18 kube-system pods found
	I0828 16:53:24.862300   18249 system_pods.go:61] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.862310   18249 system_pods.go:61] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.862319   18249 system_pods.go:61] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.862329   18249 system_pods.go:61] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.862334   18249 system_pods.go:61] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.862340   18249 system_pods.go:61] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.862346   18249 system_pods.go:61] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.862351   18249 system_pods.go:61] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.862357   18249 system_pods.go:61] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.862364   18249 system_pods.go:61] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.862376   18249 system_pods.go:61] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.862382   18249 system_pods.go:61] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.862394   18249 system_pods.go:61] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.862404   18249 system_pods.go:61] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.862414   18249 system_pods.go:61] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862426   18249 system_pods.go:61] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862432   18249 system_pods.go:61] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.862438   18249 system_pods.go:61] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.862447   18249 system_pods.go:74] duration metric: took 8.718746ms to wait for pod list to return data ...
	I0828 16:53:24.862458   18249 default_sa.go:34] waiting for default service account to be created ...
	I0828 16:53:24.864930   18249 default_sa.go:45] found service account: "default"
	I0828 16:53:24.864948   18249 default_sa.go:55] duration metric: took 2.483987ms for default service account to be created ...
	I0828 16:53:24.864954   18249 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 16:53:24.873151   18249 system_pods.go:86] 18 kube-system pods found
	I0828 16:53:24.873179   18249 system_pods.go:89] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.873192   18249 system_pods.go:89] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.873200   18249 system_pods.go:89] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.873209   18249 system_pods.go:89] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.873217   18249 system_pods.go:89] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.873223   18249 system_pods.go:89] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.873230   18249 system_pods.go:89] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.873239   18249 system_pods.go:89] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.873246   18249 system_pods.go:89] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.873252   18249 system_pods.go:89] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.873261   18249 system_pods.go:89] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.873267   18249 system_pods.go:89] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.873275   18249 system_pods.go:89] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.873283   18249 system_pods.go:89] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.873293   18249 system_pods.go:89] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873305   18249 system_pods.go:89] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873311   18249 system_pods.go:89] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.873319   18249 system_pods.go:89] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.873330   18249 system_pods.go:126] duration metric: took 8.36895ms to wait for k8s-apps to be running ...
	I0828 16:53:24.873342   18249 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 16:53:24.873397   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:53:24.891586   18249 system_svc.go:56] duration metric: took 18.235397ms WaitForService to wait for kubelet
	I0828 16:53:24.891614   18249 kubeadm.go:582] duration metric: took 33.028263807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:53:24.891635   18249 node_conditions.go:102] verifying NodePressure condition ...
	I0828 16:53:24.895227   18249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 16:53:24.895250   18249 node_conditions.go:123] node cpu capacity is 2
	I0828 16:53:24.895261   18249 node_conditions.go:105] duration metric: took 3.620897ms to run NodePressure ...
	I0828 16:53:24.895272   18249 start.go:241] waiting for startup goroutines ...
	I0828 16:53:25.094459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.125633   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.155792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.157753   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:25.595906   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.625747   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.655075   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.658011   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.094834   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.129755   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.155136   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.157330   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.593981   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.625973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.664009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.664214   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.095448   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.125667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.154619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.157410   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.595673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.625374   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.655905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.657898   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.094619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.128498   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.154730   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.595931   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.625670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.655499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.659580   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.094542   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.125191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.154692   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.156836   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.594830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.625397   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.655016   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.658369   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.095041   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.125951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.197156   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.197430   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.593884   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.626012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.655288   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.658497   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.094267   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.126053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.155845   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:31.157620   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.595111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.625862   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.659323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.659393   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.095279   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.125599   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.199254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:32.199409   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.594421   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.655429   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.657475   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.094915   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.125310   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.154609   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.156659   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.594492   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.625457   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.654434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.656859   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.094787   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.126012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.155559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.158068   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.606896   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.655451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.658409   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.094387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.125741   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.155049   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.156962   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.595142   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.626314   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.656424   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.658188   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.094587   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.125299   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.157566   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.162381   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.594757   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.625338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.654928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.657667   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.095534   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.125174   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.154440   18249 kapi.go:107] duration metric: took 37.003171679s to wait for kubernetes.io/minikube-addons=registry ...
	I0828 16:53:37.156447   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.594798   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.625235   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.656908   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.095661   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.126092   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.158261   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.595348   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.625091   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.657913   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.094636   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.126184   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.157665   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.594133   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.658035   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.095449   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.125725   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.157599   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.594861   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.625830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.657531   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.124902   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.158798   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.594588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.625002   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.657786   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.095776   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.127039   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.158485   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.645960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.647890   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.657722   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.095058   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.127772   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.157380   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.595802   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.626208   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.659191   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.095784   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.125689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.157160   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.594967   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.625614   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.657657   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.098165   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.125532   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.157027   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.595371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.626505   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.658717   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.094137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.125930   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.159054   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.597552   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.625716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.657534   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.095137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.125905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.158224   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.636222   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.637581   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.657044   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.094826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.125355   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.157656   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.594813   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.631137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.657624   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.095053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.128446   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.157355   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.595223   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.626255   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.658186   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.095856   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.127379   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:50.158702   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.595643   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.698127   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.698171   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.094801   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.125567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.157384   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:51.595613   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.627271   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.657145   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.101226   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.125053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.157436   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.593985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.625898   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.658285   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.095068   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.126152   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.157104   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.594124   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.626149   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.657735   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.099081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.126193   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.157152   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.595009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.626412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.720927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.094671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.125251   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.156958   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.596323   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.624970   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.657746   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.094441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.125622   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.156601   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.595765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.630056   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.698961   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.094616   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.125818   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.157863   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.594274   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.624777   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.657816   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.096341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.126916   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.158947   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.595441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.625428   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.657100   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.095929   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.125671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.157343   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.594697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.625751   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.657338   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.095059   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.125731   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.157953   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.595257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.627464   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.657563   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.094667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.125904   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.157762   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.594499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.624717   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.657505   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.094567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.125907   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.196935   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.595038   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.625765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.696647   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.094272   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.125427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.157639   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.594871   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.625673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.657841   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.094887   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.126789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.157551   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.595035   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.627362   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.095367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.197028   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.197341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.594590   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.625380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.657085   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.095202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.126191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.596094   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.625814   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.658641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.100240   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.131987   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.158146   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.595588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.625705   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.657218   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.141202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.141936   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.170688   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.595335   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.625506   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.657914   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.097081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.126472   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.157818   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.595778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.625507   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.658020   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.095683   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.125569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.157674   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.595427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.626371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.657765   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.094606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.130408   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.158323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.595040   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.626209   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.658014   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.095395   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.125926   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.157848   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.594680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.625860   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.657412   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.094853   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.196216   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.196765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.600021   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.626826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.657927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:14.095522   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.125684   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:14.157491   18249 kapi.go:107] duration metric: took 1m14.004214208s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 16:54:14.594716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.625548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.094682   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.125350   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.596546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.625572   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.094125   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.125975   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.594260   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.625018   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.094891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.125763   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.594205   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.626413   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.094280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.125555   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.598192   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.627321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.095258   18249 kapi.go:107] duration metric: took 1m16.504298837s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 16:54:19.097233   18249 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-990097 cluster.
	I0828 16:54:19.098992   18249 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 16:54:19.100337   18249 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 16:54:19.132159   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.626709   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.125928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.626509   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.126771   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.625546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.126321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.626308   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:23.128207   18249 kapi.go:107] duration metric: took 1m22.507265973s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 16:54:23.129806   18249 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0828 16:54:23.131012   18249 addons.go:510] duration metric: took 1m31.267643413s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0828 16:54:23.131051   18249 start.go:246] waiting for cluster config update ...
	I0828 16:54:23.131069   18249 start.go:255] writing updated cluster config ...
	I0828 16:54:23.131315   18249 ssh_runner.go:195] Run: rm -f paused
	I0828 16:54:23.182950   18249 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 16:54:23.184758   18249 out.go:177] * Done! kubectl is now configured to use "addons-990097" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.607859228Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864725607835129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e2493f3-5e6a-402d-bf36-2f77a0f6617d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.608479871Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a8a35de-b2f3-4467-842b-4bc03debf0bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.608620799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a8a35de-b2f3-4467-842b-4bc03debf0bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.609472544Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
4864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec
4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadat
a{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Att
empt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},
Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a8a35de-b2f3-4467-842b-4bc03debf0bc name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.646667604Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18be00b2-b835-43ff-883e-5e2a4963b313 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.646758477Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18be00b2-b835-43ff-883e-5e2a4963b313 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.647632693Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=deca96d2-a7aa-45e1-aebc-52c07f64c82b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.649097407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864725649069626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=deca96d2-a7aa-45e1-aebc-52c07f64c82b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.649683663Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03b35b46-95bb-469a-ab35-b4fea58cf889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.649737568Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03b35b46-95bb-469a-ab35-b4fea58cf889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.650041115Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
4864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec
4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadat
a{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Att
empt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},
Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03b35b46-95bb-469a-ab35-b4fea58cf889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.681080013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2584abea-381a-464a-b5b4-ea1542ecfd30 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.681152434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2584abea-381a-464a-b5b4-ea1542ecfd30 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.682595018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9c58d71-8310-45a3-827d-6b8e7f813748 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.683831037Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864725683801666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9c58d71-8310-45a3-827d-6b8e7f813748 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.684485516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82c7cf8e-4efc-4574-96bd-1213c925cb3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.684550883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82c7cf8e-4efc-4574-96bd-1213c925cb3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.684854683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
4864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec
4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadat
a{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Att
empt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},
Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=82c7cf8e-4efc-4574-96bd-1213c925cb3b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.726144286Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=975ba2af-c110-4525-9a60-d01345e37b3c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.726233936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=975ba2af-c110-4525-9a60-d01345e37b3c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.727185139Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=55e44b3c-9e2f-4589-946c-f78a61f02a4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.728403508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864725728378323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=55e44b3c-9e2f-4589-946c-f78a61f02a4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.729140673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8de0d6cd-b072-4983-b3b1-62650dca7519 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.729196724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8de0d6cd-b072-4983-b3b1-62650dca7519 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:05:25 addons-990097 crio[658]: time="2024-08-28 17:05:25.729569109Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb13e41307b5658457c95881ad7bc385756c8a6e0c884dd77ced9e7662188df0,PodSandboxId:653bf553c3fe1b26f7c07d71bceb65bd5a8f866d866aa0561ac6e8ffe31a773e,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036713531958,Labels:map[string]string{io.kubernetes.container.name: patch,io.
kubernetes.pod.name: ingress-nginx-admission-patch-h8rvs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d937ff9-473a-4187-a50e-7cf052b30dc4,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:515908142ae573504e393dfa4953480d861082c2862d4cc4db879d360029ae2c,PodSandboxId:e2e20a8025ba14e11434fa68c4f14158ccb3c89d05c3cb29424b2d0765ca5278,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1724864036563996787,Labels:map[string]string{io.
kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dqzdf,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d58556bc-d999-4b9b-91f6-93b53d5b8d2c,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:172
4864032785543121,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb
19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string
]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedI
mage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec
4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadat
a{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Att
empt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},
Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8de0d6cd-b072-4983-b3b1-62650dca7519 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	073ea5ad9ed0c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        7 seconds ago       Running             hello-world-app           0                   f467ab0b0144a       hello-world-app-55bf9c44b4-4ksfc
	7d927dbf90a83       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                              2 minutes ago       Running             nginx                     0                   4d68d074fbaf6       nginx
	c026a720fa74e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 11 minutes ago      Running             gcp-auth                  0                   736ed095eb5c9       gcp-auth-89d5ffd79-hhsh7
	cb13e41307b56       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              patch                     0                   653bf553c3fe1       ingress-nginx-admission-patch-h8rvs
	515908142ae57       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   e2e20a8025ba1       ingress-nginx-admission-create-dqzdf
	5bd52d706a171       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner    0                   7bde8dc056090       local-path-provisioner-86d989889c-fs8wf
	9760e94848e1a       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago      Running             metrics-server            0                   c79990266e87d       metrics-server-84c5f94fbc-s6z6n
	092298cdfb616       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   c61ef1e53e51b       storage-provisioner
	04f71727199d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             12 minutes ago      Running             coredns                   0                   50cdf2ec92991       coredns-6f6b679f8f-8gjc6
	f41de974958b8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             12 minutes ago      Running             kube-proxy                0                   37e7fe6fa66b5       kube-proxy-8qj9l
	7c59931085105       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             12 minutes ago      Running             kube-scheduler            0                   f5c9bab6fb293       kube-scheduler-addons-990097
	e7f9f99f0e0ad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             12 minutes ago      Running             kube-apiserver            0                   fcc77a679af87       kube-apiserver-addons-990097
	b8d25fadc3e3b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             12 minutes ago      Running             kube-controller-manager   0                   2ff31a06164b2       kube-controller-manager-addons-990097
	f5afe4e2c7c30       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   3e4bbd88d6334       etcd-addons-990097
	
	
	==> coredns [04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b] <==
	[INFO] 127.0.0.1:43936 - 36274 "HINFO IN 1185575041321747915.1095525017323975341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010495017s
	[INFO] 10.244.0.7:36545 - 56598 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00046725s
	[INFO] 10.244.0.7:36545 - 34323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122837s
	[INFO] 10.244.0.7:40812 - 34220 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159278s
	[INFO] 10.244.0.7:40812 - 30894 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094014s
	[INFO] 10.244.0.7:51634 - 55543 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000178288s
	[INFO] 10.244.0.7:51634 - 16073 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087886s
	[INFO] 10.244.0.7:58682 - 5261 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220947s
	[INFO] 10.244.0.7:58682 - 20879 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015173s
	[INFO] 10.244.0.7:34574 - 59863 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142024s
	[INFO] 10.244.0.7:34574 - 27092 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153861s
	[INFO] 10.244.0.7:47702 - 54016 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074647s
	[INFO] 10.244.0.7:47702 - 51998 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067543s
	[INFO] 10.244.0.7:41963 - 59886 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068451s
	[INFO] 10.244.0.7:41963 - 56300 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027312s
	[INFO] 10.244.0.7:43940 - 2554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010857s
	[INFO] 10.244.0.7:43940 - 48379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072487s
	[INFO] 10.244.0.22:56224 - 47882 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420049s
	[INFO] 10.244.0.22:50407 - 64319 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234351s
	[INFO] 10.244.0.22:57980 - 2289 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127832s
	[INFO] 10.244.0.22:51961 - 33598 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075597s
	[INFO] 10.244.0.22:37745 - 53825 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120959s
	[INFO] 10.244.0.22:46423 - 60876 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059568s
	[INFO] 10.244.0.22:56705 - 36016 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000732573s
	[INFO] 10.244.0.22:55859 - 40874 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001065258s
	
	
	==> describe nodes <==
	Name:               addons-990097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-990097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-990097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-990097
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-990097
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:04:19 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:04:19 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:04:19 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:04:19 +0000   Wed, 28 Aug 2024 16:52:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-990097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fc997bea7fd463bb1b99884632d7f13
	  System UUID:                6fc997be-a7fd-463b-b1b9-9884632d7f13
	  Boot ID:                    c2f58d05-673b-4f75-ad50-a0fe6c092504
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-4ksfc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-89d5ffd79-hhsh7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-8gjc6                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-990097                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-990097               250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-990097      200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8qj9l                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-990097               100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-s6z6n            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-fs8wf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-990097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-990097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-990097 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-990097 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-990097 event: Registered Node addons-990097 in Controller
	
	
	==> dmesg <==
	[ +27.839945] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.867762] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.427541] kauditd_printk_skb: 12 callbacks suppressed
	[Aug28 16:54] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.057528] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.649408] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.118173] kauditd_printk_skb: 45 callbacks suppressed
	[ +23.135978] kauditd_printk_skb: 6 callbacks suppressed
	[Aug28 16:55] kauditd_printk_skb: 30 callbacks suppressed
	[Aug28 16:56] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 16:59] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 17:02] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.029063] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.016574] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.965420] kauditd_printk_skb: 11 callbacks suppressed
	[Aug28 17:03] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.023131] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.275073] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.034230] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.067261] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.756849] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.874514] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.278471] kauditd_printk_skb: 25 callbacks suppressed
	[Aug28 17:05] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.002472] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca] <==
	{"level":"warn","ts":"2024-08-28T16:53:22.461042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"314.018812ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:53:22.461070Z","caller":"traceutil/trace.go:171","msg":"trace[1208754861] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:898; }","duration":"314.048115ms","start":"2024-08-28T16:53:22.147017Z","end":"2024-08-28T16:53:22.461065Z","steps":["trace[1208754861] 'range keys from in-memory index tree'  (duration: 313.969862ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:53:22.461087Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T16:53:22.146944Z","time spent":"314.138242ms","remote":"127.0.0.1:52266","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-28T16:53:34.593913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.743715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97\" ","response":"range_response_count:1 size:781"}
	{"level":"info","ts":"2024-08-28T16:53:34.593958Z","caller":"traceutil/trace.go:171","msg":"trace[1783725651] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97; range_end:; response_count:1; response_revision:925; }","duration":"207.796464ms","start":"2024-08-28T16:53:34.386149Z","end":"2024-08-28T16:53:34.593946Z","steps":["trace[1783725651] 'range keys from in-memory index tree'  (duration: 207.618074ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:42.558909Z","caller":"traceutil/trace.go:171","msg":"trace[1823174305] transaction","detail":"{read_only:false; response_revision:947; number_of_response:1; }","duration":"287.407068ms","start":"2024-08-28T16:53:42.271483Z","end":"2024-08-28T16:53:42.558890Z","steps":["trace[1823174305] 'process raft request'  (duration: 287.285479ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:47.622998Z","caller":"traceutil/trace.go:171","msg":"trace[932674930] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"104.356711ms","start":"2024-08-28T16:53:47.518628Z","end":"2024-08-28T16:53:47.622985Z","steps":["trace[932674930] 'process raft request'  (duration: 104.239303ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.302592Z","caller":"traceutil/trace.go:171","msg":"trace[1524241447] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"230.314513ms","start":"2024-08-28T16:54:55.072243Z","end":"2024-08-28T16:54:55.302557Z","steps":["trace[1524241447] 'process raft request'  (duration: 229.745464ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.303259Z","caller":"traceutil/trace.go:171","msg":"trace[1962350705] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"198.610451ms","start":"2024-08-28T16:54:55.103505Z","end":"2024-08-28T16:54:55.302115Z","steps":["trace[1962350705] 'read index received'  (duration: 198.397171ms)","trace[1962350705] 'applied index is now lower than readState.Index'  (duration: 212.527µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T16:54:55.303540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.965293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-08-28T16:54:55.303613Z","caller":"traceutil/trace.go:171","msg":"trace[178375171] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1269; }","duration":"200.115796ms","start":"2024-08-28T16:54:55.103483Z","end":"2024-08-28T16:54:55.303599Z","steps":["trace[178375171] 'agreement among raft nodes before linearized reading'  (duration: 199.893413ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:02:42.414396Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-08-28T17:02:42.450275Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"35.104221ms","hash":1413996905,"current-db-size-bytes":6000640,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3461120,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-08-28T17:02:42.450436Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1413996905,"revision":1528,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T17:02:53.325278Z","caller":"traceutil/trace.go:171","msg":"trace[2108371229] linearizableReadLoop","detail":"{readStateIndex:2211; appliedIndex:2210; }","duration":"459.294326ms","start":"2024-08-28T17:02:52.865949Z","end":"2024-08-28T17:02:53.325243Z","steps":["trace[2108371229] 'read index received'  (duration: 459.150699ms)","trace[2108371229] 'applied index is now lower than readState.Index'  (duration: 142.943µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:02:53.325511Z","caller":"traceutil/trace.go:171","msg":"trace[424925906] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"525.181818ms","start":"2024-08-28T17:02:52.800315Z","end":"2024-08-28T17:02:53.325497Z","steps":["trace[424925906] 'process raft request'  (duration: 524.829974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.733213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-28T17:02:53.325825Z","caller":"traceutil/trace.go:171","msg":"trace[162657861] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:2063; }","duration":"368.829874ms","start":"2024-08-28T17:02:52.956983Z","end":"2024-08-28T17:02:53.325812Z","steps":["trace[162657861] 'agreement among raft nodes before linearized reading'  (duration: 368.661415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.956950Z","time spent":"368.907244ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-08-28T17:02:53.326000Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.0423ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:02:53.326031Z","caller":"traceutil/trace.go:171","msg":"trace[1263793325] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2063; }","duration":"460.081559ms","start":"2024-08-28T17:02:52.865944Z","end":"2024-08-28T17:02:53.326026Z","steps":["trace[1263793325] 'agreement among raft nodes before linearized reading'  (duration: 460.033068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.327962Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.800270Z","time spent":"525.3055ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2016 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-08-28T17:03:45.831964Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:03:45.434224Z","time spent":"397.729251ms","remote":"127.0.0.1:52116","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-08-28T17:04:18.231086Z","caller":"traceutil/trace.go:171","msg":"trace[2114474411] transaction","detail":"{read_only:false; response_revision:2520; number_of_response:1; }","duration":"163.39208ms","start":"2024-08-28T17:04:18.067647Z","end":"2024-08-28T17:04:18.231039Z","steps":["trace[2114474411] 'process raft request'  (duration: 163.273021ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:04:28.391969Z","caller":"traceutil/trace.go:171","msg":"trace[768217437] transaction","detail":"{read_only:false; response_revision:2530; number_of_response:1; }","duration":"114.240171ms","start":"2024-08-28T17:04:28.277712Z","end":"2024-08-28T17:04:28.391952Z","steps":["trace[768217437] 'process raft request'  (duration: 114.119686ms)"],"step_count":1}
	
	
	==> gcp-auth [c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4] <==
	2024/08/28 16:54:23 Ready to write response ...
	2024/08/28 17:02:37 Ready to marshal response ...
	2024/08/28 17:02:37 Ready to write response ...
	2024/08/28 17:02:46 Ready to marshal response ...
	2024/08/28 17:02:46 Ready to write response ...
	2024/08/28 17:02:50 Ready to marshal response ...
	2024/08/28 17:02:50 Ready to write response ...
	2024/08/28 17:03:06 Ready to marshal response ...
	2024/08/28 17:03:06 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:33 Ready to marshal response ...
	2024/08/28 17:03:33 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:52 Ready to marshal response ...
	2024/08/28 17:03:52 Ready to write response ...
	2024/08/28 17:05:15 Ready to marshal response ...
	2024/08/28 17:05:15 Ready to write response ...
	
	
	==> kernel <==
	 17:05:26 up 13 min,  0 users,  load average: 0.34, 0.51, 0.42
	Linux addons-990097 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83] <==
	 > logger="UnhandledError"
	E0828 16:54:47.903698       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	E0828 16:54:47.909556       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	I0828 16:54:47.980457       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0828 17:02:44.290693       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0828 17:02:45.318931       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0828 17:02:50.191896       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0828 17:02:50.441092       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.222.232"}
	I0828 17:03:01.316914       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:22.465921       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.465977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.486678       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.486850       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.594997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.595115       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.613013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.614969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.617986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.618315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:03:23.613355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:03:23.619379       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0828 17:03:23.735621       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:41.358241       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.113.183"}
	E0828 17:03:56.081411       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.195:8443->10.244.0.32:49968: read: connection reset by peer
	I0828 17:05:15.856419       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.186.179"}
	
	
	==> kube-controller-manager [b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880] <==
	I0828 17:04:04.018531       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	W0828 17:04:05.200662       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:05.200722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:04:19.855558       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-990097"
	W0828 17:04:23.560014       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:23.560072       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:24.993887       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:24.994000       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:41.740828       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:41.740889       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:44.044382       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:44.044448       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:59.115923       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:59.116074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:05:01.647247       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:05:01.647329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:05:15.700197       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="53.153127ms"
	I0828 17:05:15.721089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="20.368899ms"
	I0828 17:05:15.732765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.619578ms"
	I0828 17:05:15.732860       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="43.226µs"
	I0828 17:05:17.750919       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0828 17:05:17.757820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.342µs"
	I0828 17:05:17.764279       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0828 17:05:18.881853       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.150969ms"
	I0828 17:05:18.882053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="76.83µs"
	
	
	==> kube-proxy [f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 16:52:52.089415       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 16:52:52.099940       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0828 16:52:52.099997       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:52.173377       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 16:52:52.173438       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 16:52:52.173468       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:52.175943       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:52.176378       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:52.176391       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:52.177695       1 config.go:197] "Starting service config controller"
	I0828 16:52:52.177716       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:52.177745       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:52.177750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:52.178237       1 config.go:326] "Starting node config controller"
	I0828 16:52:52.178244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:52.278337       1 shared_informer.go:320] Caches are synced for node config
	I0828 16:52:52.278370       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:52.278391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093] <==
	W0828 16:52:43.762343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:43.762378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:43.767505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 16:52:43.767603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.593944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 16:52:44.593990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.649260       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:52:44.649415       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0828 16:52:44.667387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 16:52:44.667479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.675396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:52:44.675487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.740397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 16:52:44.740445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.770930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 16:52:44.770991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.825118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 16:52:44.825170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.869231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.869366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.933958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 16:52:44.934034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.988755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.988802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0828 16:52:47.648648       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:05:15 addons-990097 kubelet[1192]: I0828 17:05:15.838952    1192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64-gcp-creds\") pod \"hello-world-app-55bf9c44b4-4ksfc\" (UID: \"a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64\") " pod="default/hello-world-app-55bf9c44b4-4ksfc"
	Aug 28 17:05:15 addons-990097 kubelet[1192]: I0828 17:05:15.839186    1192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2sk\" (UniqueName: \"kubernetes.io/projected/a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64-kube-api-access-5w2sk\") pod \"hello-world-app-55bf9c44b4-4ksfc\" (UID: \"a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64\") " pod="default/hello-world-app-55bf9c44b4-4ksfc"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: E0828 17:05:16.703795    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864716703450408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:567301,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: E0828 17:05:16.704946    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864716703450408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:567301,},InodesUsed:&UInt64Value{Value:195,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.836619    1192 scope.go:117] "RemoveContainer" containerID="e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.848073    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94mzk\" (UniqueName: \"kubernetes.io/projected/3020f9b2-3535-4950-b84f-5387dcc8f455-kube-api-access-94mzk\") pod \"3020f9b2-3535-4950-b84f-5387dcc8f455\" (UID: \"3020f9b2-3535-4950-b84f-5387dcc8f455\") "
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.854426    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3020f9b2-3535-4950-b84f-5387dcc8f455-kube-api-access-94mzk" (OuterVolumeSpecName: "kube-api-access-94mzk") pod "3020f9b2-3535-4950-b84f-5387dcc8f455" (UID: "3020f9b2-3535-4950-b84f-5387dcc8f455"). InnerVolumeSpecName "kube-api-access-94mzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.855890    1192 scope.go:117] "RemoveContainer" containerID="e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: E0828 17:05:16.856537    1192 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280\": container with ID starting with e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280 not found: ID does not exist" containerID="e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.856573    1192 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280"} err="failed to get container status \"e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280\": rpc error: code = NotFound desc = could not find container \"e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280\": container with ID starting with e9fe58775a0c9398e7e2442105666d7c9cbb94fd3f5761f42ff838097591d280 not found: ID does not exist"
	Aug 28 17:05:16 addons-990097 kubelet[1192]: I0828 17:05:16.949002    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-94mzk\" (UniqueName: \"kubernetes.io/projected/3020f9b2-3535-4950-b84f-5387dcc8f455-kube-api-access-94mzk\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:05:18 addons-990097 kubelet[1192]: I0828 17:05:18.409106    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3020f9b2-3535-4950-b84f-5387dcc8f455" path="/var/lib/kubelet/pods/3020f9b2-3535-4950-b84f-5387dcc8f455/volumes"
	Aug 28 17:05:18 addons-990097 kubelet[1192]: I0828 17:05:18.409955    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d937ff9-473a-4187-a50e-7cf052b30dc4" path="/var/lib/kubelet/pods/4d937ff9-473a-4187-a50e-7cf052b30dc4/volumes"
	Aug 28 17:05:18 addons-990097 kubelet[1192]: I0828 17:05:18.410459    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d58556bc-d999-4b9b-91f6-93b53d5b8d2c" path="/var/lib/kubelet/pods/d58556bc-d999-4b9b-91f6-93b53d5b8d2c/volumes"
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.081847    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z2v9\" (UniqueName: \"kubernetes.io/projected/ff0eadf6-676d-45fe-80d5-d11090925146-kube-api-access-9z2v9\") pod \"ff0eadf6-676d-45fe-80d5-d11090925146\" (UID: \"ff0eadf6-676d-45fe-80d5-d11090925146\") "
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.081917    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0eadf6-676d-45fe-80d5-d11090925146-webhook-cert\") pod \"ff0eadf6-676d-45fe-80d5-d11090925146\" (UID: \"ff0eadf6-676d-45fe-80d5-d11090925146\") "
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.084073    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0eadf6-676d-45fe-80d5-d11090925146-kube-api-access-9z2v9" (OuterVolumeSpecName: "kube-api-access-9z2v9") pod "ff0eadf6-676d-45fe-80d5-d11090925146" (UID: "ff0eadf6-676d-45fe-80d5-d11090925146"). InnerVolumeSpecName "kube-api-access-9z2v9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.084567    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff0eadf6-676d-45fe-80d5-d11090925146-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ff0eadf6-676d-45fe-80d5-d11090925146" (UID: "ff0eadf6-676d-45fe-80d5-d11090925146"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.182207    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9z2v9\" (UniqueName: \"kubernetes.io/projected/ff0eadf6-676d-45fe-80d5-d11090925146-kube-api-access-9z2v9\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.182243    1192 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ff0eadf6-676d-45fe-80d5-d11090925146-webhook-cert\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:05:21 addons-990097 kubelet[1192]: E0828 17:05:21.571629    1192 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" image="gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Aug 28 17:05:21 addons-990097 kubelet[1192]: E0828 17:05:21.572319    1192 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:busybox,Image:gcr.io/k8s-minikube/busybox:1.28.4-glibc,Command:[sleep 3600],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:GOOGLE_APPLICATION_CREDENTIALS,Value:/google-app-creds.json,ValueFrom:nil,},EnvVar{Name:PROJECT_ID,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCP_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GCLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:GOOGLE_CLOUD_PROJECT,Value:this_is_fake,ValueFrom:nil,},EnvVar{Name:CLOUDSDK_CORE_PROJECT,Value:this_is_fake,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-58r55,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name
:gcp-creds,ReadOnly:true,MountPath:/google-app-creds.json,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod busybox_default(27dca925-9e7b-46e8-b9f4-9b11d07e0de2): ErrImagePull: unable to retrieve auth token: invalid username/password: unauthorized: authentication failed" logger="UnhandledError"
	Aug 28 17:05:21 addons-990097 kubelet[1192]: E0828 17:05:21.573586    1192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="27dca925-9e7b-46e8-b9f4-9b11d07e0de2"
	Aug 28 17:05:21 addons-990097 kubelet[1192]: I0828 17:05:21.879861    1192 scope.go:117] "RemoveContainer" containerID="2854a478340381451b911b5768ee455787c1bbcc9946c76a56e81c7c43402731"
	Aug 28 17:05:22 addons-990097 kubelet[1192]: I0828 17:05:22.408159    1192 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0eadf6-676d-45fe-80d5-d11090925146" path="/var/lib/kubelet/pods/ff0eadf6-676d-45fe-80d5-d11090925146/volumes"
	
	
	==> storage-provisioner [092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6] <==
	I0828 16:52:58.911009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:58.964276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:59.019593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:59.214120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:59.226396       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	I0828 16:52:59.227671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ef84ee-3904-40bf-b67a-f3ab38dd9ae4", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e became leader
	I0828 16:52:59.636127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-990097 -n addons-990097
helpers_test.go:261: (dbg) Run:  kubectl --context addons-990097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-990097 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-990097 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-990097/192.168.39.195
	Start Time:       Wed, 28 Aug 2024 16:54:23 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58r55 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-58r55:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-990097
	  Normal   Pulling    9m24s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m24s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m24s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m11s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    53s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.97s)

                                                
                                    
TestAddons/parallel/MetricsServer (361.56s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.687835ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003969457s
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (77.402694ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 9m41.08453238s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (65.880196ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 9m42.656272585s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (64.00416ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 9m48.780505882s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (64.759386ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 9m57.471849567s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (69.536736ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 10m8.969772849s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (63.27683ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 10m20.333260639s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (66.584655ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 10m48.921726542s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (66.53111ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 11m8.949222581s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (62.620236ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 12m11.675913892s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (59.818659ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 12m50.855834449s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (64.696477ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 14m19.230780402s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-990097 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-990097 top pods -n kube-system: exit status 1 (61.434936ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-8gjc6, age: 15m33.820873735s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-990097 -n addons-990097
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 logs -n 25: (1.360532615s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-238617                                                                     | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| delete  | -p download-only-382773                                                                     | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | binary-mirror-802579                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34799                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-802579                                                                     | binary-mirror-802579 | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:52 UTC |
	| addons  | disable dashboard -p                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC |                     |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-990097 --wait=true                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 16:52 UTC | 28 Aug 24 16:54 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:02 UTC | 28 Aug 24 17:02 UTC |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh curl -s                                                                   | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-990097 ssh cat                                                                       | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | /opt/local-path-provisioner/pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-990097 ip                                                                            | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | -p addons-990097                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | -p addons-990097                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | addons-990097                                                                               |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:03 UTC | 28 Aug 24 17:03 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-990097 ip                                                                            | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-990097 addons disable                                                                | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:05 UTC | 28 Aug 24 17:05 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-990097 addons                                                                        | addons-990097        | jenkins | v1.33.1 | 28 Aug 24 17:08 UTC | 28 Aug 24 17:08 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
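The wrapped flag cells in the last rows of the table above all belong to a single long invocation against the addons-990097 profile. Read on one line (showing only the flags visible above; the truncated leading arguments are omitted), a roughly equivalent start command is sketched below:

	minikube start -p addons-990097 --driver=kvm2 --container-runtime=crio \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --addons=ingress --addons=ingress-dns --addons=helm-tiller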
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:52:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:52:03.553302   18249 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:52:03.553558   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.553567   18249 out.go:358] Setting ErrFile to fd 2...
	I0828 16:52:03.553572   18249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:52:03.554137   18249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 16:52:03.555206   18249 out.go:352] Setting JSON to false
	I0828 16:52:03.556015   18249 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2070,"bootTime":1724861854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:52:03.556070   18249 start.go:139] virtualization: kvm guest
	I0828 16:52:03.557879   18249 out.go:177] * [addons-990097] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:52:03.559933   18249 notify.go:220] Checking for updates...
	I0828 16:52:03.559948   18249 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 16:52:03.561141   18249 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:52:03.562248   18249 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:03.563381   18249 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:03.564522   18249 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 16:52:03.565685   18249 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 16:52:03.567058   18249 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:52:03.598505   18249 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 16:52:03.599805   18249 start.go:297] selected driver: kvm2
	I0828 16:52:03.599821   18249 start.go:901] validating driver "kvm2" against <nil>
	I0828 16:52:03.599832   18249 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 16:52:03.600482   18249 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.600546   18249 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 16:52:03.615718   18249 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 16:52:03.615767   18249 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:52:03.616004   18249 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:52:03.616072   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:03.616089   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:03.616099   18249 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:52:03.616172   18249 start.go:340] cluster config:
	{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:03.616295   18249 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:52:03.618096   18249 out.go:177] * Starting "addons-990097" primary control-plane node in "addons-990097" cluster
	I0828 16:52:03.619317   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:03.619368   18249 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 16:52:03.619389   18249 cache.go:56] Caching tarball of preloaded images
	I0828 16:52:03.619481   18249 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 16:52:03.619495   18249 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 16:52:03.619843   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:03.619867   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json: {Name:mk1d9cf08f8bf0b3aa1979f7c4b7b4ba59401421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:03.620021   18249 start.go:360] acquireMachinesLock for addons-990097: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 16:52:03.620070   18249 start.go:364] duration metric: took 34.81µs to acquireMachinesLock for "addons-990097"
	I0828 16:52:03.620088   18249 start.go:93] Provisioning new machine with config: &{Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:03.620159   18249 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 16:52:03.622720   18249 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0828 16:52:03.622873   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:03.622908   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:03.637096   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37335
	I0828 16:52:03.637576   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:03.638135   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:03.638159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:03.638519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:03.638728   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:03.638904   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:03.639054   18249 start.go:159] libmachine.API.Create for "addons-990097" (driver="kvm2")
	I0828 16:52:03.639083   18249 client.go:168] LocalClient.Create starting
	I0828 16:52:03.639131   18249 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 16:52:03.706793   18249 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 16:52:04.040558   18249 main.go:141] libmachine: Running pre-create checks...
	I0828 16:52:04.040580   18249 main.go:141] libmachine: (addons-990097) Calling .PreCreateCheck
	I0828 16:52:04.041083   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:04.041464   18249 main.go:141] libmachine: Creating machine...
	I0828 16:52:04.041477   18249 main.go:141] libmachine: (addons-990097) Calling .Create
	I0828 16:52:04.041686   18249 main.go:141] libmachine: (addons-990097) Creating KVM machine...
	I0828 16:52:04.042940   18249 main.go:141] libmachine: (addons-990097) DBG | found existing default KVM network
	I0828 16:52:04.043689   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.043534   18271 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0828 16:52:04.043707   18249 main.go:141] libmachine: (addons-990097) DBG | created network xml: 
	I0828 16:52:04.043719   18249 main.go:141] libmachine: (addons-990097) DBG | <network>
	I0828 16:52:04.043734   18249 main.go:141] libmachine: (addons-990097) DBG |   <name>mk-addons-990097</name>
	I0828 16:52:04.043744   18249 main.go:141] libmachine: (addons-990097) DBG |   <dns enable='no'/>
	I0828 16:52:04.043754   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043761   18249 main.go:141] libmachine: (addons-990097) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0828 16:52:04.043768   18249 main.go:141] libmachine: (addons-990097) DBG |     <dhcp>
	I0828 16:52:04.043774   18249 main.go:141] libmachine: (addons-990097) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0828 16:52:04.043781   18249 main.go:141] libmachine: (addons-990097) DBG |     </dhcp>
	I0828 16:52:04.043787   18249 main.go:141] libmachine: (addons-990097) DBG |   </ip>
	I0828 16:52:04.043797   18249 main.go:141] libmachine: (addons-990097) DBG |   
	I0828 16:52:04.043808   18249 main.go:141] libmachine: (addons-990097) DBG | </network>
	I0828 16:52:04.043821   18249 main.go:141] libmachine: (addons-990097) DBG | 
	I0828 16:52:04.048764   18249 main.go:141] libmachine: (addons-990097) DBG | trying to create private KVM network mk-addons-990097 192.168.39.0/24...
	I0828 16:52:04.113488   18249 main.go:141] libmachine: (addons-990097) DBG | private KVM network mk-addons-990097 192.168.39.0/24 created
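The DBG lines above show the libvirt network XML that libmachine defines and then creates as mk-addons-990097. To confirm the result on the host, standard virsh commands (an illustrative check, not part of the test run) can list the network, dump the same definition, and show its DHCP leases:

	virsh net-list --all                     # mk-addons-990097 should appear as active
	virsh net-dumpxml mk-addons-990097       # prints the <network> definition logged above
	virsh net-dhcp-leases mk-addons-990097   # leases handed out from the 192.168.39.2-253 range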
	I0828 16:52:04.113513   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.113440   18271 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.113526   18249 main.go:141] libmachine: (addons-990097) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.113543   18249 main.go:141] libmachine: (addons-990097) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 16:52:04.113618   18249 main.go:141] libmachine: (addons-990097) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 16:52:04.371432   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.371337   18271 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa...
	I0828 16:52:04.533443   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533306   18271 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk...
	I0828 16:52:04.533482   18249 main.go:141] libmachine: (addons-990097) DBG | Writing magic tar header
	I0828 16:52:04.533524   18249 main.go:141] libmachine: (addons-990097) DBG | Writing SSH key tar header
	I0828 16:52:04.533569   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:04.533458   18271 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 ...
	I0828 16:52:04.533617   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097
	I0828 16:52:04.533642   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097 (perms=drwx------)
	I0828 16:52:04.533657   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 16:52:04.533672   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:52:04.533690   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 16:52:04.533705   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 16:52:04.533713   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home/jenkins
	I0828 16:52:04.533724   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 16:52:04.533737   18249 main.go:141] libmachine: (addons-990097) DBG | Checking permissions on dir: /home
	I0828 16:52:04.533748   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 16:52:04.533762   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 16:52:04.533774   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 16:52:04.533786   18249 main.go:141] libmachine: (addons-990097) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 16:52:04.533798   18249 main.go:141] libmachine: (addons-990097) DBG | Skipping /home - not owner
	I0828 16:52:04.533808   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:04.535453   18249 main.go:141] libmachine: (addons-990097) define libvirt domain using xml: 
	I0828 16:52:04.535472   18249 main.go:141] libmachine: (addons-990097) <domain type='kvm'>
	I0828 16:52:04.535482   18249 main.go:141] libmachine: (addons-990097)   <name>addons-990097</name>
	I0828 16:52:04.535497   18249 main.go:141] libmachine: (addons-990097)   <memory unit='MiB'>4000</memory>
	I0828 16:52:04.535505   18249 main.go:141] libmachine: (addons-990097)   <vcpu>2</vcpu>
	I0828 16:52:04.535513   18249 main.go:141] libmachine: (addons-990097)   <features>
	I0828 16:52:04.535525   18249 main.go:141] libmachine: (addons-990097)     <acpi/>
	I0828 16:52:04.535533   18249 main.go:141] libmachine: (addons-990097)     <apic/>
	I0828 16:52:04.535543   18249 main.go:141] libmachine: (addons-990097)     <pae/>
	I0828 16:52:04.535552   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.535560   18249 main.go:141] libmachine: (addons-990097)   </features>
	I0828 16:52:04.535573   18249 main.go:141] libmachine: (addons-990097)   <cpu mode='host-passthrough'>
	I0828 16:52:04.535578   18249 main.go:141] libmachine: (addons-990097)   
	I0828 16:52:04.535587   18249 main.go:141] libmachine: (addons-990097)   </cpu>
	I0828 16:52:04.535595   18249 main.go:141] libmachine: (addons-990097)   <os>
	I0828 16:52:04.535599   18249 main.go:141] libmachine: (addons-990097)     <type>hvm</type>
	I0828 16:52:04.535605   18249 main.go:141] libmachine: (addons-990097)     <boot dev='cdrom'/>
	I0828 16:52:04.535610   18249 main.go:141] libmachine: (addons-990097)     <boot dev='hd'/>
	I0828 16:52:04.535620   18249 main.go:141] libmachine: (addons-990097)     <bootmenu enable='no'/>
	I0828 16:52:04.535627   18249 main.go:141] libmachine: (addons-990097)   </os>
	I0828 16:52:04.535632   18249 main.go:141] libmachine: (addons-990097)   <devices>
	I0828 16:52:04.535640   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='cdrom'>
	I0828 16:52:04.535673   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/boot2docker.iso'/>
	I0828 16:52:04.535696   18249 main.go:141] libmachine: (addons-990097)       <target dev='hdc' bus='scsi'/>
	I0828 16:52:04.535707   18249 main.go:141] libmachine: (addons-990097)       <readonly/>
	I0828 16:52:04.535719   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535743   18249 main.go:141] libmachine: (addons-990097)     <disk type='file' device='disk'>
	I0828 16:52:04.535766   18249 main.go:141] libmachine: (addons-990097)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 16:52:04.535787   18249 main.go:141] libmachine: (addons-990097)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/addons-990097.rawdisk'/>
	I0828 16:52:04.535800   18249 main.go:141] libmachine: (addons-990097)       <target dev='hda' bus='virtio'/>
	I0828 16:52:04.535809   18249 main.go:141] libmachine: (addons-990097)     </disk>
	I0828 16:52:04.535822   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535834   18249 main.go:141] libmachine: (addons-990097)       <source network='mk-addons-990097'/>
	I0828 16:52:04.535847   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535857   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535873   18249 main.go:141] libmachine: (addons-990097)     <interface type='network'>
	I0828 16:52:04.535886   18249 main.go:141] libmachine: (addons-990097)       <source network='default'/>
	I0828 16:52:04.535900   18249 main.go:141] libmachine: (addons-990097)       <model type='virtio'/>
	I0828 16:52:04.535911   18249 main.go:141] libmachine: (addons-990097)     </interface>
	I0828 16:52:04.535920   18249 main.go:141] libmachine: (addons-990097)     <serial type='pty'>
	I0828 16:52:04.535932   18249 main.go:141] libmachine: (addons-990097)       <target port='0'/>
	I0828 16:52:04.535942   18249 main.go:141] libmachine: (addons-990097)     </serial>
	I0828 16:52:04.535953   18249 main.go:141] libmachine: (addons-990097)     <console type='pty'>
	I0828 16:52:04.535965   18249 main.go:141] libmachine: (addons-990097)       <target type='serial' port='0'/>
	I0828 16:52:04.535984   18249 main.go:141] libmachine: (addons-990097)     </console>
	I0828 16:52:04.536000   18249 main.go:141] libmachine: (addons-990097)     <rng model='virtio'>
	I0828 16:52:04.536015   18249 main.go:141] libmachine: (addons-990097)       <backend model='random'>/dev/random</backend>
	I0828 16:52:04.536025   18249 main.go:141] libmachine: (addons-990097)     </rng>
	I0828 16:52:04.536033   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536041   18249 main.go:141] libmachine: (addons-990097)     
	I0828 16:52:04.536047   18249 main.go:141] libmachine: (addons-990097)   </devices>
	I0828 16:52:04.536052   18249 main.go:141] libmachine: (addons-990097) </domain>
	I0828 16:52:04.536066   18249 main.go:141] libmachine: (addons-990097) 
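The domain XML above is submitted to libvirt programmatically by libmachine. As a rough manual equivalent (an illustrative sketch only; the file name is hypothetical), the same definition saved to disk could be registered and booted with virsh:

	virsh define addons-990097.xml   # register the domain from the XML logged above (hypothetical file)
	virsh start addons-990097        # boot it; the attached boot2docker.iso (hdc) is the first boot device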
	I0828 16:52:04.542000   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:8a:92:29 in network default
	I0828 16:52:04.542553   18249 main.go:141] libmachine: (addons-990097) Ensuring networks are active...
	I0828 16:52:04.542572   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:04.543276   18249 main.go:141] libmachine: (addons-990097) Ensuring network default is active
	I0828 16:52:04.543557   18249 main.go:141] libmachine: (addons-990097) Ensuring network mk-addons-990097 is active
	I0828 16:52:04.544054   18249 main.go:141] libmachine: (addons-990097) Getting domain xml...
	I0828 16:52:04.544739   18249 main.go:141] libmachine: (addons-990097) Creating domain...
	I0828 16:52:05.926909   18249 main.go:141] libmachine: (addons-990097) Waiting to get IP...
	I0828 16:52:05.927895   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:05.928293   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:05.928329   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:05.928275   18271 retry.go:31] will retry after 307.43588ms: waiting for machine to come up
	I0828 16:52:06.237778   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.238168   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.238197   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.238118   18271 retry.go:31] will retry after 239.740862ms: waiting for machine to come up
	I0828 16:52:06.479526   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.479888   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.479911   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.479872   18271 retry.go:31] will retry after 313.269043ms: waiting for machine to come up
	I0828 16:52:06.794296   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:06.794785   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:06.794809   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:06.794738   18271 retry.go:31] will retry after 569.173838ms: waiting for machine to come up
	I0828 16:52:07.365385   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.365805   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.365854   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.365801   18271 retry.go:31] will retry after 528.42487ms: waiting for machine to come up
	I0828 16:52:07.896190   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:07.896616   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:07.896641   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:07.896567   18271 retry.go:31] will retry after 860.364887ms: waiting for machine to come up
	I0828 16:52:08.758007   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:08.758436   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:08.758461   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:08.758398   18271 retry.go:31] will retry after 735.816889ms: waiting for machine to come up
	I0828 16:52:09.496298   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:09.496737   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:09.496767   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:09.496707   18271 retry.go:31] will retry after 1.098370398s: waiting for machine to come up
	I0828 16:52:10.596985   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:10.597408   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:10.597437   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:10.597359   18271 retry.go:31] will retry after 1.834335212s: waiting for machine to come up
	I0828 16:52:12.434290   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:12.434611   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:12.434633   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:12.434571   18271 retry.go:31] will retry after 2.041065784s: waiting for machine to come up
	I0828 16:52:14.477426   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:14.477916   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:14.477948   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:14.477861   18271 retry.go:31] will retry after 1.984370117s: waiting for machine to come up
	I0828 16:52:16.464891   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:16.465274   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:16.465295   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:16.465230   18271 retry.go:31] will retry after 3.029154804s: waiting for machine to come up
	I0828 16:52:19.496261   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:19.496603   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:19.496625   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:19.496589   18271 retry.go:31] will retry after 3.151315591s: waiting for machine to come up
	I0828 16:52:22.651764   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:22.652112   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find current IP address of domain addons-990097 in network mk-addons-990097
	I0828 16:52:22.652134   18249 main.go:141] libmachine: (addons-990097) DBG | I0828 16:52:22.652073   18271 retry.go:31] will retry after 4.012346275s: waiting for machine to come up
	I0828 16:52:26.667962   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668404   18249 main.go:141] libmachine: (addons-990097) Found IP for machine: 192.168.39.195
	I0828 16:52:26.668422   18249 main.go:141] libmachine: (addons-990097) Reserving static IP address...
	I0828 16:52:26.668433   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has current primary IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.668824   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "addons-990097", mac: "52:54:00:36:9e:33", ip: "192.168.39.195"} in network mk-addons-990097
	I0828 16:52:26.740976   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:26.741009   18249 main.go:141] libmachine: (addons-990097) Reserved static IP address: 192.168.39.195
	I0828 16:52:26.741023   18249 main.go:141] libmachine: (addons-990097) Waiting for SSH to be available...
	I0828 16:52:26.743441   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:26.743738   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097
	I0828 16:52:26.743775   18249 main.go:141] libmachine: (addons-990097) DBG | unable to find defined IP address of network mk-addons-990097 interface with MAC address 52:54:00:36:9e:33
	I0828 16:52:26.743951   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:26.743968   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:26.743999   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:26.744010   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:26.744026   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:26.754106   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: exit status 255: 
	I0828 16:52:26.754130   18249 main.go:141] libmachine: (addons-990097) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 16:52:26.754137   18249 main.go:141] libmachine: (addons-990097) DBG | command : exit 0
	I0828 16:52:26.754143   18249 main.go:141] libmachine: (addons-990097) DBG | err     : exit status 255
	I0828 16:52:26.754151   18249 main.go:141] libmachine: (addons-990097) DBG | output  : 
	I0828 16:52:29.754760   18249 main.go:141] libmachine: (addons-990097) DBG | Getting to WaitForSSH function...
	I0828 16:52:29.757068   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757372   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.757400   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.757503   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH client type: external
	I0828 16:52:29.757540   18249 main.go:141] libmachine: (addons-990097) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa (-rw-------)
	I0828 16:52:29.757562   18249 main.go:141] libmachine: (addons-990097) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.195 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 16:52:29.757572   18249 main.go:141] libmachine: (addons-990097) DBG | About to run SSH command:
	I0828 16:52:29.757582   18249 main.go:141] libmachine: (addons-990097) DBG | exit 0
	I0828 16:52:29.877937   18249 main.go:141] libmachine: (addons-990097) DBG | SSH cmd err, output: <nil>: 
	I0828 16:52:29.878225   18249 main.go:141] libmachine: (addons-990097) KVM machine creation complete!
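Creation is declared complete once the guest holds a DHCP lease for 192.168.39.195 and answers exit 0 over SSH. Equivalent manual checks (illustrative only, reusing the key path and SSH options already shown in the log) would be:

	virsh net-dhcp-leases mk-addons-990097   # expect 52:54:00:36:9e:33 -> 192.168.39.195
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa \
	    docker@192.168.39.195 'exit 0'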
	I0828 16:52:29.878543   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:29.879088   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:29.879423   18249 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 16:52:29.879439   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:29.880692   18249 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 16:52:29.880710   18249 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 16:52:29.880719   18249 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 16:52:29.880732   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.882838   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883224   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.883254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.883344   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.883507   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883658   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.883823   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.884002   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.884174   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.884185   18249 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 16:52:29.985509   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 16:52:29.985528   18249 main.go:141] libmachine: Detecting the provisioner...
	I0828 16:52:29.985535   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:29.988176   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:29.988544   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:29.988718   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:29.988926   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989088   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:29.989208   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:29.989336   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:29.989559   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:29.989571   18249 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 16:52:30.090732   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 16:52:30.090818   18249 main.go:141] libmachine: found compatible host: buildroot
	I0828 16:52:30.090830   18249 main.go:141] libmachine: Provisioning with buildroot...
	I0828 16:52:30.090838   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091074   18249 buildroot.go:166] provisioning hostname "addons-990097"
	I0828 16:52:30.091095   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.091265   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.094119   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094571   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.094674   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.094784   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.094970   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095160   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.095304   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.095507   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.095700   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.095717   18249 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-990097 && echo "addons-990097" | sudo tee /etc/hostname
	I0828 16:52:30.212118   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-990097
	
	I0828 16:52:30.212145   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.214848   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215331   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.215363   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.215707   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.215913   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.216244   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.216447   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.216630   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.216653   18249 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-990097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-990097/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-990097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 16:52:30.326941   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
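The two SSH commands above set the guest hostname to addons-990097 and ensure /etc/hosts resolves it via a 127.0.1.1 entry. A quick in-guest verification (illustrative, not part of the provisioning flow) would be:

	hostname                         # should print addons-990097
	grep addons-990097 /etc/hosts    # should show the 127.0.1.1 line added by the snippet above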
	I0828 16:52:30.326969   18249 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 16:52:30.326993   18249 buildroot.go:174] setting up certificates
	I0828 16:52:30.327005   18249 provision.go:84] configureAuth start
	I0828 16:52:30.327014   18249 main.go:141] libmachine: (addons-990097) Calling .GetMachineName
	I0828 16:52:30.327328   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.330236   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330668   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.330698   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.330848   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.332951   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333214   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.333255   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.333377   18249 provision.go:143] copyHostCerts
	I0828 16:52:30.333453   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 16:52:30.333574   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 16:52:30.333649   18249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 16:52:30.333709   18249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.addons-990097 san=[127.0.0.1 192.168.39.195 addons-990097 localhost minikube]
	I0828 16:52:30.457282   18249 provision.go:177] copyRemoteCerts
	I0828 16:52:30.457342   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 16:52:30.457365   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.460211   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460550   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.460584   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.460756   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.460951   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.461115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.461336   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.544126   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 16:52:30.567154   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 16:52:30.592366   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 16:52:30.617237   18249 provision.go:87] duration metric: took 290.219862ms to configureAuth
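The configureAuth step above generates a CA-signed server certificate whose SANs cover 127.0.0.1, 192.168.39.195, addons-990097, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal openssl sketch of producing an equivalent cert (file names, SANs and the guest user/IP are taken from the log; minikube itself does this natively in Go, not via openssl):

    # Issue a server key and CSR, then sign it with the existing CA.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.addons-990097"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.195,DNS:addons-990097,DNS:localhost,DNS:minikube')
    # Copy the results to the guest, mirroring the scp lines above.
    scp ca.pem server.pem server-key.pem docker@192.168.39.195:/etc/docker/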
	I0828 16:52:30.617267   18249 buildroot.go:189] setting minikube options for container-runtime
	I0828 16:52:30.617448   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:30.617548   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.619914   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620221   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.620254   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.620425   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.620640   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620783   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.620914   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.621107   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.621256   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.621270   18249 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 16:52:30.848003   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 16:52:30.848031   18249 main.go:141] libmachine: Checking connection to Docker...
	I0828 16:52:30.848042   18249 main.go:141] libmachine: (addons-990097) Calling .GetURL
	I0828 16:52:30.849229   18249 main.go:141] libmachine: (addons-990097) DBG | Using libvirt version 6000000
	I0828 16:52:30.851198   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851502   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.851525   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.851678   18249 main.go:141] libmachine: Docker is up and running!
	I0828 16:52:30.851690   18249 main.go:141] libmachine: Reticulating splines...
	I0828 16:52:30.851696   18249 client.go:171] duration metric: took 27.21260345s to LocalClient.Create
	I0828 16:52:30.851716   18249 start.go:167] duration metric: took 27.212664809s to libmachine.API.Create "addons-990097"
	I0828 16:52:30.851725   18249 start.go:293] postStartSetup for "addons-990097" (driver="kvm2")
	I0828 16:52:30.851734   18249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 16:52:30.851750   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:30.851973   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 16:52:30.851995   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.853964   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854285   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.854301   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.854478   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.854647   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.854805   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.854935   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:30.935753   18249 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 16:52:30.939610   18249 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 16:52:30.939637   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 16:52:30.939732   18249 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 16:52:30.939770   18249 start.go:296] duration metric: took 88.03849ms for postStartSetup
	I0828 16:52:30.939814   18249 main.go:141] libmachine: (addons-990097) Calling .GetConfigRaw
	I0828 16:52:30.940381   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:30.942790   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943103   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.943132   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.943312   18249 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/config.json ...
	I0828 16:52:30.943514   18249 start.go:128] duration metric: took 27.323344868s to createHost
	I0828 16:52:30.943546   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:30.945603   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.945953   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:30.945978   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:30.946156   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:30.946323   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946607   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:30.946786   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:30.946957   18249 main.go:141] libmachine: Using SSH client type: native
	I0828 16:52:30.947128   18249 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.195 22 <nil> <nil>}
	I0828 16:52:30.947143   18249 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 16:52:31.050660   18249 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724863951.031106642
	
	I0828 16:52:31.050686   18249 fix.go:216] guest clock: 1724863951.031106642
	I0828 16:52:31.050696   18249 fix.go:229] Guest: 2024-08-28 16:52:31.031106642 +0000 UTC Remote: 2024-08-28 16:52:30.943527716 +0000 UTC m=+27.423947828 (delta=87.578926ms)
	I0828 16:52:31.050749   18249 fix.go:200] guest clock delta is within tolerance: 87.578926ms
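The guest-clock check above reads "date +%s.%N" on the VM over SSH and compares it to the host's clock, accepting the ~88 ms delta as within tolerance. A hedged bash sketch of the same idea; the SSH target mirrors the log, while the 1-second threshold is an illustrative placeholder rather than minikube's actual tolerance:

    # Compare guest and host clocks; flag skew above a chosen tolerance.
    guest=$(ssh docker@192.168.39.195 'date +%s.%N')   # guest clock, as in the log
    host=$(date +%s.%N)                                # host clock
    delta=$(awk -v g="$guest" -v h="$host" 'BEGIN { d = h - g; if (d < 0) d = -d; print d }')
    awk -v d="$delta" 'BEGIN { exit !(d < 1.0) }' \
      && echo "guest clock delta ${delta}s is within tolerance" \
      || echo "guest clock is skewed by ${delta}s; consider syncing time"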
	I0828 16:52:31.050759   18249 start.go:83] releasing machines lock for "addons-990097", held for 27.430678011s
	I0828 16:52:31.050790   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.051040   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:31.053422   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053797   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.053831   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.053954   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054408   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054525   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:31.054615   18249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 16:52:31.054667   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.054710   18249 ssh_runner.go:195] Run: cat /version.json
	I0828 16:52:31.054729   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:31.057139   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057472   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057561   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057604   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057752   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.057882   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:31.057908   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:31.057911   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058061   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:31.058069   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058230   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:31.058334   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:31.058301   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.058460   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:31.130356   18249 ssh_runner.go:195] Run: systemctl --version
	I0828 16:52:31.176423   18249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 16:52:31.331223   18249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 16:52:31.337047   18249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 16:52:31.337126   18249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 16:52:31.352067   18249 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 16:52:31.352090   18249 start.go:495] detecting cgroup driver to use...
	I0828 16:52:31.352154   18249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 16:52:31.366292   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 16:52:31.378875   18249 docker.go:217] disabling cri-docker service (if available) ...
	I0828 16:52:31.378945   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 16:52:31.391391   18249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 16:52:31.403829   18249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 16:52:31.515593   18249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 16:52:31.658525   18249 docker.go:233] disabling docker service ...
	I0828 16:52:31.658598   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 16:52:31.672788   18249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 16:52:31.684923   18249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 16:52:31.832671   18249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 16:52:31.955950   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 16:52:31.968509   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 16:52:31.985170   18249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 16:52:31.985222   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:31.994290   18249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 16:52:31.994356   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.003644   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.012976   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.022206   18249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 16:52:32.031981   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.041468   18249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 16:52:32.056996   18249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
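The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup to "pod", and add a default sysctl so pods can bind low ports without extra privileges. Assuming only those edits, the relevant part of the drop-in would read roughly as follows (a sketch of the affected keys, not a dump of the actual file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]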
	I0828 16:52:32.066128   18249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 16:52:32.074610   18249 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 16:52:32.074673   18249 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 16:52:32.086779   18249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
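Because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, the sysctl probe above fails with status 255 and minikube falls back to modprobe plus an ad-hoc write to ip_forward. A hedged sketch of making the same prerequisites persistent across reboots using standard systemd mechanisms (not something the test run itself does):

    # Load br_netfilter at every boot and keep the sysctls set persistently.
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    sudo tee /etc/sysctl.d/99-kubernetes.conf <<'EOF'
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sudo modprobe br_netfilter
    sudo sysctl --system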
	I0828 16:52:32.095844   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:32.217079   18249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 16:52:32.305084   18249 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 16:52:32.305166   18249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 16:52:32.309450   18249 start.go:563] Will wait 60s for crictl version
	I0828 16:52:32.309525   18249 ssh_runner.go:195] Run: which crictl
	I0828 16:52:32.312948   18249 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 16:52:32.349653   18249 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 16:52:32.349768   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.374953   18249 ssh_runner.go:195] Run: crio --version
	I0828 16:52:32.403065   18249 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 16:52:32.404404   18249 main.go:141] libmachine: (addons-990097) Calling .GetIP
	I0828 16:52:32.406839   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407142   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:32.407172   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:32.407345   18249 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 16:52:32.411258   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:32.422553   18249 kubeadm.go:883] updating cluster {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 16:52:32.422662   18249 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:52:32.422725   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:32.452295   18249 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 16:52:32.452389   18249 ssh_runner.go:195] Run: which lz4
	I0828 16:52:32.455957   18249 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 16:52:32.459683   18249 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 16:52:32.459715   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 16:52:33.619457   18249 crio.go:462] duration metric: took 1.163529047s to copy over tarball
	I0828 16:52:33.619537   18249 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 16:52:35.728451   18249 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.108883425s)
	I0828 16:52:35.728489   18249 crio.go:469] duration metric: took 2.108993771s to extract the tarball
	I0828 16:52:35.728498   18249 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 16:52:35.764177   18249 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 16:52:35.805986   18249 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 16:52:35.806013   18249 cache_images.go:84] Images are preloaded, skipping loading
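The preload check above finds no kube-apiserver image in CRI-O's store, so the ~389 MB preloaded-images tarball is copied to the guest and unpacked over /var, after which crictl confirms all images are present. A condensed bash sketch of that sequence, with paths and flags taken from the log:

    # On the guest: is the runtime already populated with control-plane images?
    sudo crictl images --output json | grep -q kube-apiserver
    # If not, push the cached preload tarball in and unpack it over /var, then clean up.
    scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.39.195:/preloaded.tar.lz4
    ssh docker@192.168.39.195 \
        'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'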
	I0828 16:52:35.806024   18249 kubeadm.go:934] updating node { 192.168.39.195 8443 v1.31.0 crio true true} ...
	I0828 16:52:35.806169   18249 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
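The kubelet fragment logged above is presumably the content of the 313-byte file scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. Reassembled as a complete systemd drop-in it looks roughly like this; the empty ExecStart= clears the ExecStart inherited from kubelet.service before the minikube-specific one is set:

    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-990097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.195

    [Install]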
	I0828 16:52:35.806256   18249 ssh_runner.go:195] Run: crio config
	I0828 16:52:35.847424   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:35.847444   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:35.847453   18249 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 16:52:35.847477   18249 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.195 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-990097 NodeName:addons-990097 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.195"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.195 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 16:52:35.847617   18249 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.195
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-990097"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.195
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.195"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 16:52:35.847688   18249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 16:52:35.857307   18249 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 16:52:35.857386   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 16:52:35.866414   18249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 16:52:35.882622   18249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 16:52:35.898146   18249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0828 16:52:35.913810   18249 ssh_runner.go:195] Run: grep 192.168.39.195	control-plane.minikube.internal$ /etc/hosts
	I0828 16:52:35.917387   18249 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.195	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 16:52:35.928840   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:36.068112   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:36.084575   18249 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097 for IP: 192.168.39.195
	I0828 16:52:36.084599   18249 certs.go:194] generating shared ca certs ...
	I0828 16:52:36.084619   18249 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.084764   18249 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 16:52:36.178723   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt ...
	I0828 16:52:36.178750   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt: {Name:mkca0e9fa435263e5e1973904de7411404a3b5c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178894   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key ...
	I0828 16:52:36.178904   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key: {Name:mke8d9e9bf1fb5b7a824f6128a8a0000adba5a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.178971   18249 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 16:52:36.394826   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt ...
	I0828 16:52:36.394851   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt: {Name:mk69004c7e13f3376a06f0abafef4bde08b0d3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395002   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key ...
	I0828 16:52:36.395013   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key: {Name:mk5411c4aa0dbd29b19b8133f87fa65318c7ad4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.395070   18249 certs.go:256] generating profile certs ...
	I0828 16:52:36.395115   18249 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key
	I0828 16:52:36.395137   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt with IP's: []
	I0828 16:52:36.439668   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt ...
	I0828 16:52:36.439694   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: {Name:mk453035261c38191e0ffde93aa6fa8d406cfb43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439845   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key ...
	I0828 16:52:36.439856   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.key: {Name:mkb125df58df3f8011bf26153ac05fdbffab3c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.439917   18249 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd
	I0828 16:52:36.439934   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.195]
	I0828 16:52:36.539648   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd ...
	I0828 16:52:36.539677   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd: {Name:mk71f54c0b4de61e9c2536a122a940b588dc9501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539818   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd ...
	I0828 16:52:36.539830   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd: {Name:mk45632fbbb3bbcb64891cfc4bf3dbd6f6b7d794 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.539890   18249 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt
	I0828 16:52:36.539962   18249 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key.dd815ccd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key
	I0828 16:52:36.540013   18249 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key
	I0828 16:52:36.540031   18249 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt with IP's: []
	I0828 16:52:36.667048   18249 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt ...
	I0828 16:52:36.667076   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt: {Name:mkd4b5d49bf60b646d45ef076f74b004c8164a16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:36.667220   18249 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key ...
	I0828 16:52:36.667230   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key: {Name:mk17bb6cc5d80faf4d912b3341e01d7aaac69711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
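With the profile certs generated, a quick way to confirm that the apiserver cert chains to minikubeCA and carries the expected SANs (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.195 per the log) is an openssl spot-check; a sketch, run from the profile directory created above:

    cd /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097
    # Does the apiserver cert chain to the generated CA?
    openssl verify -CAfile ../../ca.crt apiserver.crt
    # Which SANs did it get?
    openssl x509 -noout -text -in apiserver.crt | grep -A1 'Subject Alternative Name'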
	I0828 16:52:36.667389   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 16:52:36.667426   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 16:52:36.667452   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 16:52:36.667474   18249 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 16:52:36.668075   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 16:52:36.690924   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 16:52:36.712111   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 16:52:36.733708   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 16:52:36.764815   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 16:52:36.792036   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 16:52:36.815658   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 16:52:36.836525   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 16:52:36.857449   18249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 16:52:36.878273   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 16:52:36.893346   18249 ssh_runner.go:195] Run: openssl version
	I0828 16:52:36.899004   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 16:52:36.909101   18249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913722   18249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.913785   18249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 16:52:36.919726   18249 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
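The two ln commands above install minikubeCA into the system trust store: the cert is linked into /etc/ssl/certs under its own name and then under its subject-hash name (b5213941.0 here), which is how OpenSSL locates CA certificates at verification time. A short sketch of deriving that hash name by hand:

    # The hash used for the /etc/ssl/certs/<hash>.0 symlink is the cert's subject hash.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$hash"          # prints b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"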
	I0828 16:52:36.930086   18249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 16:52:36.933924   18249 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 16:52:36.933973   18249 kubeadm.go:392] StartCluster: {Name:addons-990097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 C
lusterName:addons-990097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:52:36.934057   18249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 16:52:36.934128   18249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 16:52:36.968169   18249 cri.go:89] found id: ""
	I0828 16:52:36.968234   18249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 16:52:36.977317   18249 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 16:52:36.985866   18249 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 16:52:36.994431   18249 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 16:52:36.994459   18249 kubeadm.go:157] found existing configuration files:
	
	I0828 16:52:36.994509   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 16:52:37.004030   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 16:52:37.004090   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 16:52:37.012639   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 16:52:37.020830   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 16:52:37.020889   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 16:52:37.029469   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.037402   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 16:52:37.037462   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 16:52:37.045618   18249 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 16:52:37.053640   18249 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 16:52:37.053694   18249 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 16:52:37.061952   18249 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 16:52:37.112124   18249 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 16:52:37.112242   18249 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 16:52:37.208201   18249 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 16:52:37.208348   18249 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 16:52:37.208461   18249 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 16:52:37.215232   18249 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 16:52:37.218733   18249 out.go:235]   - Generating certificates and keys ...
	I0828 16:52:37.218826   18249 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 16:52:37.219027   18249 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 16:52:37.494799   18249 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 16:52:37.692765   18249 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 16:52:37.856293   18249 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 16:52:38.009127   18249 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 16:52:38.187901   18249 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 16:52:38.188087   18249 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.477231   18249 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 16:52:38.477411   18249 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-990097 localhost] and IPs [192.168.39.195 127.0.0.1 ::1]
	I0828 16:52:38.539600   18249 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 16:52:39.008399   18249 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 16:52:39.328471   18249 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 16:52:39.328600   18249 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 16:52:39.560006   18249 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 16:52:39.701891   18249 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 16:52:39.854713   18249 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 16:52:39.961910   18249 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 16:52:40.053380   18249 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 16:52:40.053922   18249 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 16:52:40.056435   18249 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 16:52:40.058106   18249 out.go:235]   - Booting up control plane ...
	I0828 16:52:40.058200   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 16:52:40.058271   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 16:52:40.058614   18249 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 16:52:40.072832   18249 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 16:52:40.080336   18249 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 16:52:40.080381   18249 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 16:52:40.199027   18249 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 16:52:40.199152   18249 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 16:52:40.701214   18249 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.364407ms
	I0828 16:52:40.701332   18249 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 16:52:45.701403   18249 kubeadm.go:310] [api-check] The API server is healthy after 5.001374073s
	I0828 16:52:45.711899   18249 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 16:52:45.729058   18249 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 16:52:45.759777   18249 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 16:52:45.759972   18249 kubeadm.go:310] [mark-control-plane] Marking the node addons-990097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 16:52:45.773435   18249 kubeadm.go:310] [bootstrap-token] Using token: m82lde.zyra1pfrkjoxeehr
	I0828 16:52:45.775077   18249 out.go:235]   - Configuring RBAC rules ...
	I0828 16:52:45.775231   18249 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 16:52:45.781955   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 16:52:45.791540   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 16:52:45.798883   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 16:52:45.803511   18249 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 16:52:45.808700   18249 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 16:52:46.106541   18249 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 16:52:46.534310   18249 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 16:52:47.106029   18249 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 16:52:47.107541   18249 kubeadm.go:310] 
	I0828 16:52:47.107598   18249 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 16:52:47.107633   18249 kubeadm.go:310] 
	I0828 16:52:47.107764   18249 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 16:52:47.107778   18249 kubeadm.go:310] 
	I0828 16:52:47.107809   18249 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 16:52:47.107871   18249 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 16:52:47.107961   18249 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 16:52:47.107982   18249 kubeadm.go:310] 
	I0828 16:52:47.108056   18249 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 16:52:47.108065   18249 kubeadm.go:310] 
	I0828 16:52:47.108133   18249 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 16:52:47.108140   18249 kubeadm.go:310] 
	I0828 16:52:47.108179   18249 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 16:52:47.108239   18249 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 16:52:47.108335   18249 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 16:52:47.108350   18249 kubeadm.go:310] 
	I0828 16:52:47.108499   18249 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 16:52:47.108627   18249 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 16:52:47.108638   18249 kubeadm.go:310] 
	I0828 16:52:47.108765   18249 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.108914   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 16:52:47.108948   18249 kubeadm.go:310] 	--control-plane 
	I0828 16:52:47.108962   18249 kubeadm.go:310] 
	I0828 16:52:47.109095   18249 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 16:52:47.109106   18249 kubeadm.go:310] 
	I0828 16:52:47.109197   18249 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m82lde.zyra1pfrkjoxeehr \
	I0828 16:52:47.109291   18249 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 16:52:47.110506   18249 kubeadm.go:310] W0828 16:52:37.095179     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.110904   18249 kubeadm.go:310] W0828 16:52:37.096135     808 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 16:52:47.111022   18249 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
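The two deprecation warnings above come with kubeadm's own remedy; a minimal sketch of that migration, assuming the old config has been written out as old.yaml (both file names here are illustrative, not taken from the log):

    # Rewrite a kubeadm.k8s.io/v1beta3 config as the current API version
    kubeadm config migrate --old-config old.yaml --new-config new.yaml
    # Review the regenerated spec before reusing it
    cat new.yaml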
	I0828 16:52:47.111047   18249 cni.go:84] Creating CNI manager for ""
	I0828 16:52:47.111061   18249 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:52:47.113714   18249 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 16:52:47.114865   18249 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 16:52:47.125045   18249 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
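For context on the bridge CNI step above, a hypothetical sketch of the kind of conflist that ends up at /etc/cni/net.d/1-k8s.conflist; the actual 496-byte payload is not shown in the log, so the subnet and plugin flags below are illustrative only:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "addIf": "true",
          "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF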
	I0828 16:52:47.141868   18249 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 16:52:47.141994   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.142013   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-990097 minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-990097 minikube.k8s.io/primary=true
	I0828 16:52:47.167583   18249 ops.go:34] apiserver oom_adj: -16
	I0828 16:52:47.253359   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:47.754084   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.254277   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:48.754104   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.254023   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:49.753456   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.254174   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:50.753691   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.254102   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.754161   18249 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 16:52:51.862424   18249 kubeadm.go:1113] duration metric: took 4.720462069s to wait for elevateKubeSystemPrivileges
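The run of "kubectl get sa default" calls above is minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists after binding cluster-admin to kube-system:default. A shell sketch of the same sequence, with the paths and flags copied from the log and only the loop itself added for illustration:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    KUBECONFIG_FILE=/var/lib/minikube/kubeconfig
    # Grant kube-system:default cluster-admin, as minikube-rbac does above
    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default \
      --kubeconfig="$KUBECONFIG_FILE"
    # Poll until the default ServiceAccount is created
    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_FILE" >/dev/null 2>&1; do
      sleep 0.5
    done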
	I0828 16:52:51.862469   18249 kubeadm.go:394] duration metric: took 14.928497866s to StartCluster
	I0828 16:52:51.862492   18249 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.862622   18249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:52:51.863098   18249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 16:52:51.863295   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 16:52:51.863324   18249 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.195 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 16:52:51.863367   18249 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
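The toEnable map above lists every addon toggle for this profile. Outside the test harness, the same per-profile switches are normally driven through the minikube CLI; the commands below are standard minikube invocations, not ones taken from this log:

    minikube addons list -p addons-990097
    minikube addons enable metrics-server -p addons-990097
    minikube addons enable registry -p addons-990097
    minikube addons disable volcano -p addons-990097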
	I0828 16:52:51.863461   18249 addons.go:69] Setting default-storageclass=true in profile "addons-990097"
	I0828 16:52:51.863473   18249 addons.go:69] Setting registry=true in profile "addons-990097"
	I0828 16:52:51.863476   18249 addons.go:69] Setting metrics-server=true in profile "addons-990097"
	I0828 16:52:51.863499   18249 addons.go:234] Setting addon registry=true in "addons-990097"
	I0828 16:52:51.863492   18249 addons.go:69] Setting cloud-spanner=true in profile "addons-990097"
	I0828 16:52:51.863506   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-990097"
	I0828 16:52:51.863519   18249 addons.go:234] Setting addon metrics-server=true in "addons-990097"
	I0828 16:52:51.863529   18249 addons.go:234] Setting addon cloud-spanner=true in "addons-990097"
	I0828 16:52:51.863531   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863549   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863561   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863562   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.863607   18249 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-990097"
	I0828 16:52:51.863654   18249 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:52:51.863678   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863908   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863921   18249 addons.go:69] Setting ingress=true in profile "addons-990097"
	I0828 16:52:51.863926   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863933   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.863938   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863948   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863953   18249 addons.go:234] Setting addon ingress=true in "addons-990097"
	I0828 16:52:51.863964   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.863982   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.863460   18249 addons.go:69] Setting yakd=true in profile "addons-990097"
	I0828 16:52:51.864018   18249 addons.go:69] Setting ingress-dns=true in profile "addons-990097"
	I0828 16:52:51.864030   18249 addons.go:69] Setting storage-provisioner=true in profile "addons-990097"
	I0828 16:52:51.864038   18249 addons.go:234] Setting addon yakd=true in "addons-990097"
	I0828 16:52:51.864041   18249 addons.go:234] Setting addon ingress-dns=true in "addons-990097"
	I0828 16:52:51.864048   18249 addons.go:234] Setting addon storage-provisioner=true in "addons-990097"
	I0828 16:52:51.864050   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864057   18249 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-990097"
	I0828 16:52:51.864061   18249 addons.go:69] Setting gcp-auth=true in profile "addons-990097"
	I0828 16:52:51.864068   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864068   18249 addons.go:69] Setting helm-tiller=true in profile "addons-990097"
	I0828 16:52:51.864081   18249 mustload.go:65] Loading cluster: addons-990097
	I0828 16:52:51.864058   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864087   18249 addons.go:234] Setting addon helm-tiller=true in "addons-990097"
	I0828 16:52:51.864092   18249 addons.go:69] Setting volumesnapshots=true in profile "addons-990097"
	I0828 16:52:51.864087   18249 addons.go:69] Setting volcano=true in profile "addons-990097"
	I0828 16:52:51.864105   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864108   18249 addons.go:234] Setting addon volumesnapshots=true in "addons-990097"
	I0828 16:52:51.863468   18249 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-990097"
	I0828 16:52:51.864138   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864111   18249 addons.go:234] Setting addon volcano=true in "addons-990097"
	I0828 16:52:51.864171   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864297   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864336   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864434   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864142   18249 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-990097"
	I0828 16:52:51.864465   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864046   18249 addons.go:69] Setting inspektor-gadget=true in profile "addons-990097"
	I0828 16:52:51.864493   18249 addons.go:234] Setting addon inspektor-gadget=true in "addons-990097"
	I0828 16:52:51.864543   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864568   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864591   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864798   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864877   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864896   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864905   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864929   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864937   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864955   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864572   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.864983   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.864081   18249 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-990097"
	I0828 16:52:51.865148   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865166   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865240   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.864545   18249 config.go:182] Loaded profile config "addons-990097": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 16:52:51.865295   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.865352   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.865431   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.865521   18249 out.go:177] * Verifying Kubernetes components...
	I0828 16:52:51.867093   18249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 16:52:51.885199   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0828 16:52:51.885477   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36179
	I0828 16:52:51.885492   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33793
	I0828 16:52:51.885750   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.885755   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0828 16:52:51.885989   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886219   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886558   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886581   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886580   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.886688   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886708   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.886724   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.886737   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887264   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887324   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887350   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.887362   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.887907   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.887933   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887944   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.887912   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.887987   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.888013   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37545
	I0828 16:52:51.889234   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38429
	I0828 16:52:51.890397   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890420   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890438   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890452   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890533   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890558   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.890684   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.890713   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.891153   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.891189   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.892730   18249 addons.go:234] Setting addon default-storageclass=true in "addons-990097"
	I0828 16:52:51.892913   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.893285   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.893322   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.894924   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.894976   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.895458   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895475   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895521   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.895542   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.895836   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.895884   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.896367   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896400   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.896408   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.896431   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.920845   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46817
	I0828 16:52:51.921517   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.922235   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.922257   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.922922   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.923553   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.923595   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.928048   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38141
	I0828 16:52:51.928224   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37103
	I0828 16:52:51.928543   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928629   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.928995   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929011   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929139   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.929150   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.929913   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.930496   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.930519   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.930739   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0828 16:52:51.930776   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45579
	I0828 16:52:51.931018   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.931228   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931311   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.931596   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.931633   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.932148   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932168   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932316   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.932335   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.932583   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.932657   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.933177   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.933214   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.933573   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.934348   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46147
	I0828 16:52:51.934983   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.935496   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.935514   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.935540   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33497
	I0828 16:52:51.935941   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.936141   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.936211   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.936686   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.936702   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.937184   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.937607   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:51.937779   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.937810   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.938264   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.939007   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.939053   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.940198   18249 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 16:52:51.941257   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:51.941275   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 16:52:51.941294   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.945245   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.945869   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.945889   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.946114   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.946297   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.946469   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.947368   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
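Each addon in this section follows the pattern just shown for storage-provisioner: copy the manifest onto the node over SSH, then apply it with the bundled kubectl. The apply step only appears later in the log; a sketch of what it amounts to, reusing the paths already visible above:

    sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml \
      --kubeconfig=/var/lib/minikube/kubeconfig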
	I0828 16:52:51.948243   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I0828 16:52:51.948630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.949142   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.949159   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.949494   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.949670   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.951300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.953224   18249 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 16:52:51.954643   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 16:52:51.954663   18249 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 16:52:51.954691   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.958105   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.958534   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.958558   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.960564   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38031
	I0828 16:52:51.960712   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43619
	I0828 16:52:51.960811   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.961092   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.961160   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41831
	I0828 16:52:51.961463   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.961645   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.962144   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962212   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962501   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.962836   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962852   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.962967   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.962980   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.963302   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963364   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.963916   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.963951   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.964787   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.964813   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.966540   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.966566   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.966978   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.967204   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.969078   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.970741   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 16:52:51.971825   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 16:52:51.973044   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 16:52:51.973220   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0828 16:52:51.973630   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.974106   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.974125   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.974525   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.974714   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.975169   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39457
	I0828 16:52:51.975891   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.976592   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.976607   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.976669   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.976985   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.977259   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.977312   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I0828 16:52:51.977724   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.978015   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 16:52:51.978118   18249 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0828 16:52:51.978190   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.978212   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.978519   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.978701   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.979345   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 16:52:51.979360   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0828 16:52:51.979379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.980554   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 16:52:51.980864   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.981133   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42397
	I0828 16:52:51.981800   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.982285   18249 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 16:52:51.982336   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 16:52:51.982805   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.982823   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.983085   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.983524   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.983649   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.983860   18249 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:51.983880   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 16:52:51.983898   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.984188   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.984214   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.984253   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.984424   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 16:52:51.984488   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.985059   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 16:52:51.986056   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 16:52:51.986113   18249 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 16:52:51.986133   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.986876   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.986944   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39637
	I0828 16:52:51.987233   18249 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 16:52:51.987277   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.987408   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.988124   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.988172   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 16:52:51.988183   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 16:52:51.988201   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.988609   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.988624   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.989053   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.989096   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989270   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.989445   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46785
	I0828 16:52:51.989794   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.989811   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.989923   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.990447   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990496   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990539   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35003
	I0828 16:52:51.990826   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.990852   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.990950   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.990969   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991161   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:51.991400   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.991419   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.991421   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.991402   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991650   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.991758   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.991824   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992071   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.992286   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.992541   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992569   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.992585   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.992850   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:51.992917   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:51.992930   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:51.992959   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.992978   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.992986   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.992997   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:51.993004   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:51.993157   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:51.993194   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:51.993202   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:51.993228   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	W0828 16:52:51.993270   18249 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
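The warning above is expected on this profile: the volcano addon is skipped because it does not support cri-o. A hypothetical way to exercise volcano would be a separate profile on a runtime it does support, for example:

    minikube start -p volcano-test --container-runtime=containerd --addons=volcano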
	I0828 16:52:51.993367   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:51.993502   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:51.993600   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:51.994634   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0828 16:52:51.994968   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.995300   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:51.995803   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.995829   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.996150   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:51.996660   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:51.996695   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:51.996700   18249 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 16:52:51.998323   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0828 16:52:51.998854   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:51.999173   18249 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 16:52:51.999191   18249 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 16:52:51.999209   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:51.999355   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:51.999375   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:51.999733   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.000029   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.000074   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38625
	I0828 16:52:52.000535   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.000555   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.000620   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001158   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.001173   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.001242   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0828 16:52:52.001533   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.001840   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.001919   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.002585   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.002779   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.003130   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.003158   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.003646   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.003664   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.003919   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.004173   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.004207   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004302   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.004721   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.004745   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.004915   18249 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 16:52:52.005124   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.005449   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.005556   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0828 16:52:52.005574   18249 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 16:52:52.005964   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.006575   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.006739   18249 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.006749   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 16:52:52.006762   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.007011   18249 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-990097"
	I0828 16:52:52.007047   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:52.007210   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.007223   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.007395   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.007635   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.007947   18249 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 16:52:52.008055   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.008153   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.008529   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.009065   18249 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 16:52:52.009079   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 16:52:52.009091   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.010799   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011239   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.011257   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.011423   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.011668   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.011806   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.011928   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.012452   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.013295   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013770   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.013865   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.013823   18249 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 16:52:52.013979   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.014280   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.014407   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.014585   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.015267   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 16:52:52.015319   18249 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 16:52:52.015347   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.018874   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43185
	I0828 16:52:52.019066   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019382   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.019521   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.019539   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.019711   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.019861   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.020082   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.020241   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.020251   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.020261   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.020835   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.021022   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.021132   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33533
	I0828 16:52:52.021489   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.022124   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.022148   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.022508   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.022715   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.022934   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.024013   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 16:52:52.024558   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.026046   18249 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 16:52:52.026047   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027328   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:52:52.027344   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.027383   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 16:52:52.027410   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.028651   18249 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.028667   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 16:52:52.028681   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.031130   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031559   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.031573   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.031751   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.031908   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.032036   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.032165   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.032716   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033159   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0828 16:52:52.033303   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.033338   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.033379   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.033428   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45751
	I0828 16:52:52.033563   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.033754   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.033781   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.033785   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.034222   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034240   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034224   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.034253   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:52.034269   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.034593   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034635   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.034793   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.035047   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:52.035083   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:52.036108   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.036365   18249 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.036381   18249 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 16:52:52.036396   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.039229   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039626   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.039642   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.039793   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.039933   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.040034   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.040110   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.050832   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.050853   18249 retry.go:31] will retry after 265.877478ms: ssh: handshake failed: read tcp 192.168.39.1:52966->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.065458   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42997
	I0828 16:52:52.065895   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:52.066365   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:52.066389   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:52.066695   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:52.066934   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:52.068686   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:52.070267   18249 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 16:52:52.071813   18249 out.go:177]   - Using image docker.io/busybox:stable
	I0828 16:52:52.072975   18249 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.073002   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 16:52:52.073024   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:52.076493   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.076991   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:52.077021   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:52.077115   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:52.077290   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:52.077439   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:52.077557   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	W0828 16:52:52.078345   18249 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.078374   18249 retry.go:31] will retry after 279.535479ms: ssh: handshake failed: read tcp 192.168.39.1:52982->192.168.39.195:22: read: connection reset by peer
	I0828 16:52:52.457264   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 16:52:52.472106   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 16:52:52.472127   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 16:52:52.472898   18249 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 16:52:52.472911   18249 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 16:52:52.477184   18249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 16:52:52.477383   18249 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 16:52:52.564015   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 16:52:52.564048   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 16:52:52.575756   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 16:52:52.575777   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 16:52:52.585811   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 16:52:52.590531   18249 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 16:52:52.590558   18249 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 16:52:52.594784   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 16:52:52.613114   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 16:52:52.613137   18249 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 16:52:52.618849   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 16:52:52.630514   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 16:52:52.630548   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0828 16:52:52.680492   18249 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.680511   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 16:52:52.683692   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 16:52:52.711921   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 16:52:52.711950   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 16:52:52.758918   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 16:52:52.758942   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 16:52:52.772563   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 16:52:52.772585   18249 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 16:52:52.783118   18249 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 16:52:52.783140   18249 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 16:52:52.784569   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 16:52:52.809813   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 16:52:52.809848   18249 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 16:52:52.826609   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 16:52:52.836825   18249 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:52.836855   18249 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0828 16:52:52.857767   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 16:52:52.857793   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 16:52:52.867452   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 16:52:52.903663   18249 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 16:52:52.903735   18249 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 16:52:52.914976   18249 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 16:52:52.914995   18249 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 16:52:52.980163   18249 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:52.980191   18249 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 16:52:52.984211   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 16:52:52.984228   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 16:52:53.040803   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 16:52:53.040824   18249 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 16:52:53.043499   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 16:52:53.043517   18249 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 16:52:53.059538   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 16:52:53.066983   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 16:52:53.067015   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 16:52:53.136171   18249 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 16:52:53.136204   18249 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 16:52:53.144640   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 16:52:53.187366   18249 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.187394   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 16:52:53.212893   18249 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.212913   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 16:52:53.235809   18249 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 16:52:53.235832   18249 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 16:52:53.288679   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 16:52:53.288698   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 16:52:53.385998   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:52:53.397529   18249 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 16:52:53.397559   18249 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 16:52:53.399651   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 16:52:53.466548   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 16:52:53.466578   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 16:52:53.581666   18249 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.581691   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 16:52:53.691064   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 16:52:53.691083   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 16:52:53.853240   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 16:52:53.941644   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 16:52:53.941669   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 16:52:54.272844   18249 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.272880   18249 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 16:52:54.495971   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 16:52:54.756169   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.298876818s)
	I0828 16:52:54.756225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756239   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.756244   18249 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.278834015s)
	I0828 16:52:54.756268   18249 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0828 16:52:54.756332   18249 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.279127216s)
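The CoreDNS rewrite that just completed splices a hosts stanza in front of the existing forward-to-/etc/resolv.conf block, so pods can resolve host.minikube.internal to the host side of the VM network (192.168.39.1 here). A quick way to confirm the injected record, assuming the stock Corefile key used by kubeadm, is:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

which should contain the stanza added by the sed pipeline above:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }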
	I0828 16:52:54.756551   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.756572   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.756589   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:54.756597   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:54.757015   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:54.757050   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:54.757059   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:54.757383   18249 node_ready.go:35] waiting up to 6m0s for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786124   18249 node_ready.go:49] node "addons-990097" has status "Ready":"True"
	I0828 16:52:54.786149   18249 node_ready.go:38] duration metric: took 28.747442ms for node "addons-990097" to be "Ready" ...
	I0828 16:52:54.786161   18249 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:52:54.827906   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.293839   18249 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-990097" context rescaled to 1 replicas
	I0828 16:52:55.917518   18249 pod_ready.go:93] pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace has status "Ready":"True"
	I0828 16:52:55.917551   18249 pod_ready.go:82] duration metric: took 1.089601559s for pod "coredns-6f6b679f8f-8gjc6" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:55.917564   18249 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	I0828 16:52:57.075627   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.489775878s)
	I0828 16:52:57.075691   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.075706   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.075965   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.075988   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.075998   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:52:57.076007   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:52:57.077276   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:52:57.077308   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:52:57.077327   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:52:57.979995   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:52:59.035882   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 16:52:59.035917   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.039427   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.039927   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.039958   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.040104   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.040296   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.040538   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.040737   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:52:59.280183   18249 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 16:52:59.327255   18249 addons.go:234] Setting addon gcp-auth=true in "addons-990097"
	I0828 16:52:59.327310   18249 host.go:66] Checking if "addons-990097" exists ...
	I0828 16:52:59.327726   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.327759   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.342823   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0828 16:52:59.343340   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.343791   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.343813   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.344064   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.344682   18249 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 16:52:59.344737   18249 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 16:52:59.360102   18249 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34741
	I0828 16:52:59.360990   18249 main.go:141] libmachine: () Calling .GetVersion
	I0828 16:52:59.361500   18249 main.go:141] libmachine: Using API Version  1
	I0828 16:52:59.361519   18249 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 16:52:59.361841   18249 main.go:141] libmachine: () Calling .GetMachineName
	I0828 16:52:59.362016   18249 main.go:141] libmachine: (addons-990097) Calling .GetState
	I0828 16:52:59.363643   18249 main.go:141] libmachine: (addons-990097) Calling .DriverName
	I0828 16:52:59.363866   18249 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 16:52:59.363888   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHHostname
	I0828 16:52:59.366987   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367482   18249 main.go:141] libmachine: (addons-990097) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:9e:33", ip: ""} in network mk-addons-990097: {Iface:virbr1 ExpiryTime:2024-08-28 17:52:18 +0000 UTC Type:0 Mac:52:54:00:36:9e:33 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:addons-990097 Clientid:01:52:54:00:36:9e:33}
	I0828 16:52:59.367512   18249 main.go:141] libmachine: (addons-990097) DBG | domain addons-990097 has defined IP address 192.168.39.195 and MAC address 52:54:00:36:9e:33 in network mk-addons-990097
	I0828 16:52:59.367772   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHPort
	I0828 16:52:59.367974   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHKeyPath
	I0828 16:52:59.368154   18249 main.go:141] libmachine: (addons-990097) Calling .GetSSHUsername
	I0828 16:52:59.368303   18249 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/addons-990097/id_rsa Username:docker}
	I0828 16:53:00.143087   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.524208814s)
	I0828 16:53:00.143133   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143143   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143179   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.459455766s)
	I0828 16:53:00.143218   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.358630374s)
	I0828 16:53:00.143225   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143234   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143237   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143245   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143279   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.316631996s)
	I0828 16:53:00.143308   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143320   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143325   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.275851965s)
	I0828 16:53:00.143341   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143349   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143439   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143454   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143465   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143477   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143588   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143601   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143603   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143610   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143622   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143642   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143669   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143673   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143678   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143680   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143686   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143689   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.143693   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143697   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.143705   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143736   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143743   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.143875   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.143971   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.143990   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144005   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144007   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.144055   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.144079   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144094   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144037   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.144153   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.144353   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.144059   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.145188   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.145203   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146141   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.146157   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146170   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146183   18249 addons.go:475] Verifying addon registry=true in "addons-990097"
	I0828 16:53:00.146206   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.086636827s)
	I0828 16:53:00.146315   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.001646973s)
	I0828 16:53:00.146337   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146350   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146459   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.760428919s)
	W0828 16:53:00.146488   18249 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 16:53:00.146500   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146507   18249 retry.go:31] will retry after 285.495702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
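The apply failure quoted above is the usual CRD-establishment race: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates its CRD, and the API server has not finished registering the new kind, so the runner simply retries (and, further down, re-runs the apply with --force). When applying these manifests by hand, one way to avoid the race is to create the CRDs first and wait for them to be established; a minimal sketch, with file names standing in for the addon manifests named in the log:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml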
	I0828 16:53:00.146512   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146514   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.746824063s)
	I0828 16:53:00.146540   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.146545   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146550   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.146559   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146560   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146618   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.146697   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.293422003s)
	I0828 16:53:00.146718   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.146857   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147307   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147338   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147345   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147352   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147359   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147366   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.552552261s)
	I0828 16:53:00.147388   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147399   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147398   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147422   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147429   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147437   18249 addons.go:475] Verifying addon metrics-server=true in "addons-990097"
	I0828 16:53:00.147458   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147509   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147518   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147526   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.147534   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.147545   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147554   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.147758   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.147784   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.147790   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148382   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148394   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148412   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148414   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148424   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148431   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148445   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148453   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.148461   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.148468   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.148778   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.148815   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.148823   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149053   18249 out.go:177] * Verifying registry addon...
	I0828 16:53:00.149082   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.149108   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.149116   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.149124   18249 addons.go:475] Verifying addon ingress=true in "addons-990097"
	I0828 16:53:00.149773   18249 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-990097 service yakd-dashboard -n yakd-dashboard
	
	I0828 16:53:00.151268   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 16:53:00.151296   18249 out.go:177] * Verifying ingress addon...
	I0828 16:53:00.153273   18249 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 16:53:00.166762   18249 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 16:53:00.166788   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.182165   18249 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 16:53:00.182192   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.189119   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.189137   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.189552   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.189574   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	W0828 16:53:00.189671   18249 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
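The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the addon read the StorageClass, something else updated it first, and the write was rejected because its resourceVersion was stale. If the default class needs to be set manually, kubectl patch avoids the read-modify-write cycle (it does not send a resourceVersion) and is safe to re-run; a minimal sketch, assuming "standard" is the existing default class being demoted:

    kubectl patch storageclass standard -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'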
	I0828 16:53:00.192266   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.192288   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.192629   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.192650   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.192654   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:00.432806   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 16:53:00.441373   18249 pod_ready.go:103] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:00.616225   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.120200896s)
	I0828 16:53:00.616277   18249 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.252381186s)
	I0828 16:53:00.616290   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616306   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616613   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616635   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616651   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:00.616659   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:00.616960   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:00.616974   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:00.616985   18249 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-990097"
	I0828 16:53:00.618208   18249 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 16:53:00.618221   18249 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 16:53:00.620074   18249 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 16:53:00.620941   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 16:53:00.621479   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 16:53:00.621497   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 16:53:00.649240   18249 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 16:53:00.649265   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:00.666906   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:00.666973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:00.798819   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 16:53:00.798846   18249 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 16:53:00.965848   18249 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:00.965868   18249 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 16:53:01.096603   18249 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 16:53:01.146627   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.246922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.247289   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.625375   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:01.727635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:01.728621   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:01.939675   18249 pod_ready.go:98] pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939706   18249 pod_ready.go:82] duration metric: took 6.022133006s for pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace to be "Ready" ...
	E0828 16:53:01.939721   18249 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-jfqhl" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:53:01 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-28 16:52:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.195 HostIPs:[{IP:192.168.39.195}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-28 16:52:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-28 16:52:55 +0000 UTC,FinishedAt:2024-08-28 16:53:00 +0000 UTC,ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://ea684355151d09481718ea10390bd648315946ebb504e9d96a003b95a3770a37 Started:0xc0015a66a0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc000c10060} {Name:kube-api-access-gnbll MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc000c10070}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0828 16:53:01.939735   18249 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947681   18249 pod_ready.go:93] pod "etcd-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.947709   18249 pod_ready.go:82] duration metric: took 7.961903ms for pod "etcd-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.947723   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965179   18249 pod_ready.go:93] pod "kube-apiserver-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.965209   18249 pod_ready.go:82] duration metric: took 17.478027ms for pod "kube-apiserver-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.965223   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975413   18249 pod_ready.go:93] pod "kube-controller-manager-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.975442   18249 pod_ready.go:82] duration metric: took 10.210377ms for pod "kube-controller-manager-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.975456   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989070   18249 pod_ready.go:93] pod "kube-proxy-8qj9l" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:01.989092   18249 pod_ready.go:82] duration metric: took 13.627304ms for pod "kube-proxy-8qj9l" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:01.989102   18249 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.126567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.155944   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.158684   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:02.322404   18249 pod_ready.go:93] pod "kube-scheduler-addons-990097" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:02.322427   18249 pod_ready.go:82] duration metric: took 333.317872ms for pod "kube-scheduler-addons-990097" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.322440   18249 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:02.474322   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.04146744s)
	I0828 16:53:02.474395   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474415   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.474701   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.474716   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.474743   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.474804   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.474818   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.475006   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.475026   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585160   18249 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.48851671s)
	I0828 16:53:02.585206   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585217   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585499   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585553   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585584   18249 main.go:141] libmachine: Making call to close driver server
	I0828 16:53:02.585591   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.585596   18249 main.go:141] libmachine: (addons-990097) Calling .Close
	I0828 16:53:02.585845   18249 main.go:141] libmachine: Successfully made call to close driver server
	I0828 16:53:02.585864   18249 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 16:53:02.585870   18249 main.go:141] libmachine: (addons-990097) DBG | Closing plugin on server side
	I0828 16:53:02.587678   18249 addons.go:475] Verifying addon gcp-auth=true in "addons-990097"
	I0828 16:53:02.589137   18249 out.go:177] * Verifying gcp-auth addon...
	I0828 16:53:02.590957   18249 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 16:53:02.611253   18249 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 16:53:02.611280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:02.625344   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:02.656451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:02.659296   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.096111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.127568   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.156535   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.158882   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:03.594789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:03.625961   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:03.655530   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:03.656632   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.100416   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.202367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.202567   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:04.202579   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.332466   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:04.594922   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:04.625960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:04.654548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:04.657398   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.095212   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.127010   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.154957   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.157414   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:05.600067   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:05.627331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:05.655666   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:05.658371   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.095702   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.125685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.166060   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.196174   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.595324   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:06.625617   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:06.654792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:06.657272   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:06.827854   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:07.094934   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.126052   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:07.155943   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.157205   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.843759   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:07.843956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:07.844210   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:07.845956   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.094558   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.126496   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.156387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.158864   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.594938   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:08.625675   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:08.654652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:08.658021   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:08.829775   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:09.095286   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.125697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.156180   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.157544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:09.593920   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:09.626336   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:09.655412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:09.657265   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.095098   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.126775   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.154380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.156565   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:10.595836   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:10.625685   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:10.654838   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:10.657544   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.093858   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.125963   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.155451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.157776   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:11.329080   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:11.594338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:11.625913   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:11.655531   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:11.657757   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.094680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.125074   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.156504   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.157527   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:12.594657   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:12.625349   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:12.654353   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:12.656983   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.094718   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.125151   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.154331   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.156598   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.595126   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:13.626873   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:13.654512   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:13.656740   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:13.828160   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:14.094559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.126019   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.155228   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.158042   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:14.596006   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:14.626608   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:14.656951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:14.659254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.094812   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.125914   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.155459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.157532   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.595411   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:15.625118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:15.654905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:15.656932   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:15.833089   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:16.095434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.125283   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.155066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.156964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:16.594257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:16.625899   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:16.655321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:16.658052   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.097404   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.124748   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.155670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.158403   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:17.594954   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:17.625453   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:17.654592   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:17.656593   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.126118   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.155697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.156856   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:18.328637   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:18.595104   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:18.625985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:18.655062   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:18.657082   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.094569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.125822   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.155202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.157964   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:19.594797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:19.625854   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:19.655328   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:19.657943   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.095529   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.125903   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:20.155547   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.157641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.329359   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:20.855221   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:20.858381   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:20.859843   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:20.860540   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.094959   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.129150   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.161797   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.162220   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:21.594694   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:21.625635   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:21.655280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:21.657315   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.094660   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.125891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.473066   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.473715   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:22.476586   18249 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"False"
	I0828 16:53:22.595128   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:22.625652   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:22.654993   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:22.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.093886   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.126139   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.156079   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.158250   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:23.594455   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:23.625689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:23.654673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:23.657362   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.095220   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.197203   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.197523   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.197678   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.602569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:24.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:24.654778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:24.656915   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:24.829081   18249 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace has status "Ready":"True"
	I0828 16:53:24.829106   18249 pod_ready.go:82] duration metric: took 22.50665926s for pod "nvidia-device-plugin-daemonset-j24tf" in "kube-system" namespace to be "Ready" ...
	I0828 16:53:24.829114   18249 pod_ready.go:39] duration metric: took 30.042940712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 16:53:24.829128   18249 api_server.go:52] waiting for apiserver process to appear ...
	I0828 16:53:24.829180   18249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 16:53:24.846345   18249 api_server.go:72] duration metric: took 32.982988344s to wait for apiserver process to appear ...
	I0828 16:53:24.846376   18249 api_server.go:88] waiting for apiserver healthz status ...
	I0828 16:53:24.846397   18249 api_server.go:253] Checking apiserver healthz at https://192.168.39.195:8443/healthz ...
	I0828 16:53:24.852123   18249 api_server.go:279] https://192.168.39.195:8443/healthz returned 200:
	ok
	I0828 16:53:24.853689   18249 api_server.go:141] control plane version: v1.31.0
	I0828 16:53:24.853713   18249 api_server.go:131] duration metric: took 7.33084ms to wait for apiserver health ...
	I0828 16:53:24.853721   18249 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 16:53:24.862271   18249 system_pods.go:59] 18 kube-system pods found
	I0828 16:53:24.862300   18249 system_pods.go:61] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.862310   18249 system_pods.go:61] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.862319   18249 system_pods.go:61] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.862329   18249 system_pods.go:61] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.862334   18249 system_pods.go:61] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.862340   18249 system_pods.go:61] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.862346   18249 system_pods.go:61] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.862351   18249 system_pods.go:61] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.862357   18249 system_pods.go:61] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.862364   18249 system_pods.go:61] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.862376   18249 system_pods.go:61] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.862382   18249 system_pods.go:61] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.862394   18249 system_pods.go:61] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.862404   18249 system_pods.go:61] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.862414   18249 system_pods.go:61] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862426   18249 system_pods.go:61] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.862432   18249 system_pods.go:61] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.862438   18249 system_pods.go:61] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.862447   18249 system_pods.go:74] duration metric: took 8.718746ms to wait for pod list to return data ...
	I0828 16:53:24.862458   18249 default_sa.go:34] waiting for default service account to be created ...
	I0828 16:53:24.864930   18249 default_sa.go:45] found service account: "default"
	I0828 16:53:24.864948   18249 default_sa.go:55] duration metric: took 2.483987ms for default service account to be created ...
	I0828 16:53:24.864954   18249 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 16:53:24.873151   18249 system_pods.go:86] 18 kube-system pods found
	I0828 16:53:24.873179   18249 system_pods.go:89] "coredns-6f6b679f8f-8gjc6" [2d62cafa-b292-4c9e-bd8c-b7cc0523f58d] Running
	I0828 16:53:24.873192   18249 system_pods.go:89] "csi-hostpath-attacher-0" [f3ce9e2b-eab0-43a4-a31d-ce0831b5f168] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 16:53:24.873200   18249 system_pods.go:89] "csi-hostpath-resizer-0" [10b5d1e7-194f-42db-8780-63891a0a8ce0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 16:53:24.873209   18249 system_pods.go:89] "csi-hostpathplugin-mm9lp" [011d90e2-d937-44ec-9158-ea2c1f17b104] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 16:53:24.873217   18249 system_pods.go:89] "etcd-addons-990097" [fe186cf5-5965-4644-bc89-139f3599c0a7] Running
	I0828 16:53:24.873223   18249 system_pods.go:89] "kube-apiserver-addons-990097" [aeab6d72-59c7-47c8-acde-ebe584ab2c71] Running
	I0828 16:53:24.873230   18249 system_pods.go:89] "kube-controller-manager-addons-990097" [b1e65ab0-d778-4964-a2f1-610e4457ec7f] Running
	I0828 16:53:24.873239   18249 system_pods.go:89] "kube-ingress-dns-minikube" [3020f9b2-3535-4950-b84f-5387dcc8f455] Running
	I0828 16:53:24.873246   18249 system_pods.go:89] "kube-proxy-8qj9l" [871ff895-ba0c-47f6-aac2-55e5234d02ac] Running
	I0828 16:53:24.873252   18249 system_pods.go:89] "kube-scheduler-addons-990097" [652d01ae-78cd-4eca-99e1-b0de19bd8b88] Running
	I0828 16:53:24.873261   18249 system_pods.go:89] "metrics-server-84c5f94fbc-s6z6n" [3af617c1-2322-4d0f-af32-35d80eaeaf8c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 16:53:24.873267   18249 system_pods.go:89] "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
	I0828 16:53:24.873275   18249 system_pods.go:89] "registry-6fb4cdfc84-95krj" [28ff509c-2b4f-4dbc-ac62-07fa93fce1c0] Running
	I0828 16:53:24.873283   18249 system_pods.go:89] "registry-proxy-ds4qv" [1ab53ee3-0865-49b3-8fd0-7f176587e4d5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 16:53:24.873293   18249 system_pods.go:89] "snapshot-controller-56fcc65765-vzbnc" [0c48e398-eb8d-470d-a253-66ea5ad29759] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873305   18249 system_pods.go:89] "snapshot-controller-56fcc65765-xbr5f" [f0579b92-dea0-4457-9375-d36a3227a888] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 16:53:24.873311   18249 system_pods.go:89] "storage-provisioner" [21f51c68-9237-4afc-950e-961d7a9d6cf2] Running
	I0828 16:53:24.873319   18249 system_pods.go:89] "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
	I0828 16:53:24.873330   18249 system_pods.go:126] duration metric: took 8.36895ms to wait for k8s-apps to be running ...
	I0828 16:53:24.873342   18249 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 16:53:24.873397   18249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 16:53:24.891586   18249 system_svc.go:56] duration metric: took 18.235397ms WaitForService to wait for kubelet
	I0828 16:53:24.891614   18249 kubeadm.go:582] duration metric: took 33.028263807s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 16:53:24.891635   18249 node_conditions.go:102] verifying NodePressure condition ...
	I0828 16:53:24.895227   18249 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 16:53:24.895250   18249 node_conditions.go:123] node cpu capacity is 2
	I0828 16:53:24.895261   18249 node_conditions.go:105] duration metric: took 3.620897ms to run NodePressure ...
	I0828 16:53:24.895272   18249 start.go:241] waiting for startup goroutines ...
	I0828 16:53:25.094459   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.125633   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.155792   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.157753   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:25.595906   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:25.625747   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:25.655075   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:25.658011   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.094834   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.129755   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.155136   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.157330   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:26.593981   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:26.625973   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:26.664009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:26.664214   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.095448   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.125667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.154619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.157410   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:27.595673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:27.625374   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:27.655905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:27.657898   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.094619   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.128498   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.154730   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:28.595931   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:28.625670   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:28.655499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:28.659580   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.094542   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.125191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.154692   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.156836   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:29.594830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:29.625397   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:29.655016   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:29.658369   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.095041   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.125951   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.197156   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.197430   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:30.593884   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:30.626012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:30.655288   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:30.658497   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.094267   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.126053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.155845   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:31.157620   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.595111   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:31.625862   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:31.659323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:31.659393   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.095279   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.125599   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.199254   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:32.199409   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.594421   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:32.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:32.655429   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:32.657475   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.094915   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.125310   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.154609   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.156659   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:33.594492   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:33.625457   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:33.654434   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:33.656859   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.094787   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.126012   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.155559   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.158068   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:34.606896   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:34.625733   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:34.655451   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:34.658409   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.094387   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.125741   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.155049   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.156962   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:35.595142   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:35.626314   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:35.656424   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:35.658188   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.094587   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.125299   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.157566   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.162381   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:36.594757   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:36.625338   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:36.654928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 16:53:36.657667   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.095534   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.125174   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.154440   18249 kapi.go:107] duration metric: took 37.003171679s to wait for kubernetes.io/minikube-addons=registry ...
	I0828 16:53:37.156447   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:37.594798   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:37.625235   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:37.656908   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.095661   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.126092   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.158261   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:38.595348   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:38.625091   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:38.657913   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.094636   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.126184   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.157665   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:39.594133   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:39.625606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:39.658035   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.095449   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.125725   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.157599   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:40.594861   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:40.625830   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:40.657531   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.095211   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.124902   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.158798   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:41.594588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:41.625002   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:41.657786   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.095776   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.127039   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.158485   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:42.645960   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:42.647890   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:42.657722   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.095058   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.127772   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.157380   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:43.595802   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:43.626208   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:43.659191   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.095784   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.125689   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.157160   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:44.594967   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:44.625614   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:44.657657   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.098165   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.125532   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.157027   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:45.595371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:45.626505   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:45.658717   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.094137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.125930   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.159054   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:46.597552   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:46.625716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:46.657534   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.095137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.125905   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.158224   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:47.636222   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:47.637581   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:47.657044   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.094826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.125355   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.157656   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:48.594813   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:48.631137   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:48.657624   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.095053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.128446   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.157355   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:49.595223   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:49.626255   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:49.658186   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.095856   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.127379   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:50.158702   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.595643   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:50.698127   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:50.698171   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.094801   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.125567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.157384   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:51.595613   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:51.627271   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:51.657145   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.101226   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.125053   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.157436   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:52.593985   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:52.625898   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:52.658285   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.095068   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.126152   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.157104   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:53.594124   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:53.626149   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:53.657735   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.099081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.126193   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.157152   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:54.595009   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:54.626412   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:54.720927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.094671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.125251   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.156958   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:55.596323   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:55.624970   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:55.657746   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.094441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.125622   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.156601   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:56.595765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:56.630056   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:56.698961   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.094616   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.125818   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.157863   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:57.594274   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:57.624777   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:57.657816   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.096341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.126916   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.158947   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:58.595441   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:58.625428   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:58.657100   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.095929   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.125671   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.157343   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:53:59.594697   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:53:59.625751   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:53:59.657338   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.095059   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.125731   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.157953   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:00.595257   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:00.627464   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:00.657563   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.094667   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.125904   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.157762   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:01.594499   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:01.624717   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:01.657505   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.094567   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.125907   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.196935   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:02.595038   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:02.625765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:02.696647   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.094272   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.125427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.157639   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:03.594871   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:03.625673   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:03.657841   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.094887   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.126789   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.157551   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:04.595035   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:04.627362   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:04.658298   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.095367   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.197028   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:05.197341   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.594590   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:05.625380   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:05.657085   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.095202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.126191   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.156969   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:06.596094   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:06.625814   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:06.658641   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.100240   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.131987   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.158146   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:07.595588   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:07.625705   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:07.657218   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.141202   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.141936   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.170688   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:08.595335   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:08.625506   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:08.657914   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.097081   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.126472   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.157818   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:09.595778   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:09.625507   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:09.658020   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.095683   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.125569   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.157674   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:10.595427   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:10.626371   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:10.657765   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.094606   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.130408   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.158323   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:11.595040   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:11.626209   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:11.658014   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.095395   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.125926   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.157848   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:12.594680   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:12.625860   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:12.657412   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.094853   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.196216   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:13.196765   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.600021   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:13.626826   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:13.657927   18249 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 16:54:14.095522   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.125684   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:14.157491   18249 kapi.go:107] duration metric: took 1m14.004214208s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 16:54:14.594716   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:14.625548   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.094682   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.125350   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:15.596546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:15.625572   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.094125   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.125975   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:16.594260   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:16.625018   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.094891   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.125763   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:17.594205   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:17.626413   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.094280   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.125555   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:18.598192   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 16:54:18.627321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.095258   18249 kapi.go:107] duration metric: took 1m16.504298837s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 16:54:19.097233   18249 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-990097 cluster.
	I0828 16:54:19.098992   18249 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 16:54:19.100337   18249 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 16:54:19.132159   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:19.626709   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.125928   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:20.626509   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.126771   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:21.625546   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.126321   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:22.626308   18249 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 16:54:23.128207   18249 kapi.go:107] duration metric: took 1m22.507265973s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 16:54:23.129806   18249 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0828 16:54:23.131012   18249 addons.go:510] duration metric: took 1m31.267643413s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0828 16:54:23.131051   18249 start.go:246] waiting for cluster config update ...
	I0828 16:54:23.131069   18249 start.go:255] writing updated cluster config ...
	I0828 16:54:23.131315   18249 ssh_runner.go:195] Run: rm -f paused
	I0828 16:54:23.182950   18249 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 16:54:23.184758   18249 out.go:177] * Done! kubectl is now configured to use "addons-990097" cluster and "default" namespace by default
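The gcp-auth output above points at the `gcp-auth-skip-secret` pod label as the opt-out for credential mounting. A minimal sketch of using it at pod creation time (the webhook only mutates pods when they are created, which is why the log suggests recreating existing pods; the image name and the exact label value the webhook expects are assumptions here):

	kubectl --context addons-990097 run skip-gcp-auth-demo --image=busybox \
	  --labels="gcp-auth-skip-secret=true" --restart=Never -- sleep 3600

If the label is honored, the resulting pod should start without the GCP credential secret mounted into it.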
	
	
	==> CRI-O <==
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.149835378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906149797760,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2757f040-1dbd-4e36-9434-91664ddd74d7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.150703850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e1d7354-6f51-47cb-a0e9-833b9b6c0913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.150856253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e1d7354-6f51-47cb-a0e9-833b9b6c0913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.151273684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022
742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57
040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f14602
06f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e1d7354-6f51-47cb-a0e9-833b9b6c0913 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.193413452Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bc5c3e0-a879-491d-9c0b-7db99193cca0 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.193531843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bc5c3e0-a879-491d-9c0b-7db99193cca0 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.195172035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34e87a87-5be4-47d2-a8b5-a55af082277c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.196634620Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906196605745,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34e87a87-5be4-47d2-a8b5-a55af082277c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.197677967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e33349df-3a6b-424f-a47a-f2573d27e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.197753563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e33349df-3a6b-424f-a47a-f2573d27e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.198106934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022
742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57
040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f14602
06f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e33349df-3a6b-424f-a47a-f2573d27e33b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.235220731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d521e1ab-bee3-4e91-8f4a-b954dd326152 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.235342428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d521e1ab-bee3-4e91-8f4a-b954dd326152 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.236748162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a58bed78-f5f9-4204-bab3-a9eca8713570 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.237901663Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906237874414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a58bed78-f5f9-4204-bab3-a9eca8713570 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.238357096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1479e67-c918-43ba-9548-7177d0f08dee name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.238456893Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1479e67-c918-43ba-9548-7177d0f08dee name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.238741840Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022
742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57
040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f14602
06f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1479e67-c918-43ba-9548-7177d0f08dee name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.273283091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=118e7432-3987-479d-8b31-a50a6a439bfd name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.273414249Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=118e7432-3987-479d-8b31-a50a6a439bfd name=/runtime.v1.RuntimeService/Version
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.274497593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a99cf0e-52d6-4098-a707-03d1222a93e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.276025198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906275993925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a99cf0e-52d6-4098-a707-03d1222a93e1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.276649251Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b7893f4-4f12-4e00-b1d2-31b3a1bc874b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.276719696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b7893f4-4f12-4e00-b1d2-31b3a1bc874b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:08:26 addons-990097 crio[658]: time="2024-08-28 17:08:26.277107742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:073ea5ad9ed0cd6351fb9bc27bf3fc673216f1716f19924a1166408bfa7e913f,PodSandboxId:f467ab0b0144ada3d83a567ade3c603dd3d817d2f2e0469e5ef5467fde03d5f5,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1724864718604796484,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-4ksfc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a6eb38f8-d74b-4b90-ae87-ecba1b1c9d64,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d927dbf90a83ff4dd595ad49f620bb9ae410cb1742783cb4b8a5a38487fdf23,PodSandboxId:4d68d074fbaf61fa1d510d05ac858301daf571757d824ad651ea754f8c4bc2d3,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a,State:CONTAINER_RUNNING,CreatedAt:1724864577455247287,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 001cf7f5-0df7-4a5a-aad0-71b14bcde5db,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4,PodSandboxId:736ed095eb5c9fc3cbd6da7c87df16774a8c52e08af8e815f2bce8adf88602d3,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1724864058626436197,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-hhsh7,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: 06f15267-892e-437d-b6b9-e81e3908ced8,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bd52d706a171d5204ec8767c9491ddc42b98b3fce07edb8ce83706ff3a22ad3,PodSandboxId:7bde8dc0560903b1f3edaec22600aabfb39e0f76133de13f4d1fe655951e79e2,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1724864032785543121,Labels:map[string]string{io.kubernetes.container.name: l
ocal-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-fs8wf,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 18e65b54-2d84-4fcc-ab60-dee4237c6e47,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9760e94848e1aac98ba597828cc8176c3625acd06bdadee5683fc919a7e19367,PodSandboxId:c79990266e87d9f2d4de4cb921c2a17296389784c8856cfa17b7f9560546900f,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1724864022
742221719,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-s6z6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3af617c1-2322-4d0f-af32-35d80eaeaf8c,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6,PodSandboxId:c61ef1e53e51b23e8acca9c59d9a41833a11c57075a01478a2502007bb0e9e55,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db300
2f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724863978535674991,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21f51c68-9237-4afc-950e-961d7a9d6cf2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b,PodSandboxId:50cdf2ec929910da8ac781de90b1186a7134fbd740821ec97a9c368b3b233720,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e486
87f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724863974528434184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-8gjc6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d62cafa-b292-4c9e-bd8c-b7cc0523f58d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691,PodSandboxId:37e7fe6fa66b53536164147f67d8d808d536970a8566fb5585be164ec4ef06a5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&Im
ageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724863971729600860,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qj9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 871ff895-ba0c-47f6-aac2-55e5234d02ac,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093,PodSandboxId:f5c9bab6fb29310970dfba4458cd4c01b0edc52903f40c0221ff6c62ee244271,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57
040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724863961284761835,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e432527c42f1abf6f654ab1835aff36b,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83,PodSandboxId:fcc77a679af87198c7523acbfb8906fadf0d17daab990df6b203b08321de4cdd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f14602
06f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724863961278463970,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5162bea0d09e827cbe540638954e254d,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880,PodSandboxId:2ff31a06164b22e5a309acfe858451c0b788bf33e06bf6449e6acfe444980ebc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e0698
33752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724863961208583914,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e33f9a1284e57bf2224e7983509eefb,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca,PodSandboxId:3e4bbd88d63344271dff4865095d5bd03f37d0e147dc6efc9b0ed9e5f5e2f8ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80d
a792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724863961164906432,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-990097,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c69b85abdbc84170d94c969b4a0f426,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b7893f4-4f12-4e00-b1d2-31b3a1bc874b name=/runtime.v1.RuntimeService/ListContainers
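	
	The repeated Version/ImageFsInfo/ListContainers request-response pairs above are routine CRI polling against CRI-O's socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation further down). As a minimal sketch only, assuming the standard k8s.io/cri-api Go bindings, the same RuntimeService/ListContainers call that produced these entries can be issued like this; the empty filter is what makes crio log "No filters were applied, returning full container list":
	
	// Sketch: list containers over the CRI socket, mirroring the
	// RuntimeService/ListContainers calls seen in the crio debug log above.
	// Assumes CRI-O is listening on /var/run/crio/crio.sock, as in this report.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty request (no ContainerFilter) returns the full container list,
		// matching the "No filters were applied" debug lines above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}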
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	073ea5ad9ed0c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   3 minutes ago       Running             hello-world-app           0                   f467ab0b0144a       hello-world-app-55bf9c44b4-4ksfc
	7d927dbf90a83       docker.io/library/nginx@sha256:0c57fe90551cfd8b7d4d05763c5018607b296cb01f7e0ff44b7d047353ed8cc0                         5 minutes ago       Running             nginx                     0                   4d68d074fbaf6       nginx
	c026a720fa74e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   736ed095eb5c9       gcp-auth-89d5ffd79-hhsh7
	5bd52d706a171       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        14 minutes ago      Running             local-path-provisioner    0                   7bde8dc056090       local-path-provisioner-86d989889c-fs8wf
	9760e94848e1a       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   c79990266e87d       metrics-server-84c5f94fbc-s6z6n
	092298cdfb616       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   c61ef1e53e51b       storage-provisioner
	04f71727199d8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        15 minutes ago      Running             coredns                   0                   50cdf2ec92991       coredns-6f6b679f8f-8gjc6
	f41de974958b8       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        15 minutes ago      Running             kube-proxy                0                   37e7fe6fa66b5       kube-proxy-8qj9l
	7c59931085105       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        15 minutes ago      Running             kube-scheduler            0                   f5c9bab6fb293       kube-scheduler-addons-990097
	e7f9f99f0e0ad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        15 minutes ago      Running             kube-apiserver            0                   fcc77a679af87       kube-apiserver-addons-990097
	b8d25fadc3e3b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        15 minutes ago      Running             kube-controller-manager   0                   2ff31a06164b2       kube-controller-manager-addons-990097
	f5afe4e2c7c30       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   3e4bbd88d6334       etcd-addons-990097
	
	
	==> coredns [04f71727199d8f97f5905da2cdcacac6f9d2a72dd6a9a31d0002ead115ba850b] <==
	[INFO] 127.0.0.1:43936 - 36274 "HINFO IN 1185575041321747915.1095525017323975341. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010495017s
	[INFO] 10.244.0.7:36545 - 56598 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00046725s
	[INFO] 10.244.0.7:36545 - 34323 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122837s
	[INFO] 10.244.0.7:40812 - 34220 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159278s
	[INFO] 10.244.0.7:40812 - 30894 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094014s
	[INFO] 10.244.0.7:51634 - 55543 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000178288s
	[INFO] 10.244.0.7:51634 - 16073 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087886s
	[INFO] 10.244.0.7:58682 - 5261 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000220947s
	[INFO] 10.244.0.7:58682 - 20879 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00015173s
	[INFO] 10.244.0.7:34574 - 59863 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000142024s
	[INFO] 10.244.0.7:34574 - 27092 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000153861s
	[INFO] 10.244.0.7:47702 - 54016 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000074647s
	[INFO] 10.244.0.7:47702 - 51998 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000067543s
	[INFO] 10.244.0.7:41963 - 59886 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068451s
	[INFO] 10.244.0.7:41963 - 56300 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027312s
	[INFO] 10.244.0.7:43940 - 2554 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010857s
	[INFO] 10.244.0.7:43940 - 48379 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000072487s
	[INFO] 10.244.0.22:56224 - 47882 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000420049s
	[INFO] 10.244.0.22:50407 - 64319 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234351s
	[INFO] 10.244.0.22:57980 - 2289 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127832s
	[INFO] 10.244.0.22:51961 - 33598 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075597s
	[INFO] 10.244.0.22:37745 - 53825 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120959s
	[INFO] 10.244.0.22:46423 - 60876 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059568s
	[INFO] 10.244.0.22:56705 - 36016 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000732573s
	[INFO] 10.244.0.22:55859 - 40874 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001065258s
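	
	The paired NXDOMAIN/NOERROR entries above are the pod resolver walking its search domains (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified service name answers. A minimal sketch, as a hypothetical standalone program run inside a pod, of forcing the fully qualified lookup; the trailing dot suppresses the search-path expansion that generates the NXDOMAIN queries:
	
	// Sketch: resolve the in-cluster registry service name directly,
	// skipping search-domain expansion (hence the trailing dot).
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
	
		addrs, err := net.DefaultResolver.LookupHost(ctx,
			"registry.kube-system.svc.cluster.local.")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("registry service resolves to:", addrs)
	}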
	
	
	==> describe nodes <==
	Name:               addons-990097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-990097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-990097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T16_52_47_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-990097
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:52:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-990097
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:08:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:05:51 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:05:51 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:05:51 +0000   Wed, 28 Aug 2024 16:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:05:51 +0000   Wed, 28 Aug 2024 16:52:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    addons-990097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 6fc997bea7fd463bb1b99884632d7f13
	  System UUID:                6fc997be-a7fd-463b-b1b9-9884632d7f13
	  Boot ID:                    c2f58d05-673b-4f75-ad50-a0fe6c092504
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-4ksfc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  gcp-auth                    gcp-auth-89d5ffd79-hhsh7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-6f6b679f8f-8gjc6                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-990097                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-990097               250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-990097      200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-8qj9l                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-990097               100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-s6z6n            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  local-path-storage          local-path-provisioner-86d989889c-fs8wf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node addons-990097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node addons-990097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node addons-990097 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m   kubelet          Node addons-990097 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node addons-990097 event: Registered Node addons-990097 in Controller
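	
	The percentages kubectl prints under "Allocated resources" above are ratios against the node's allocatable capacity (2 CPUs, 3912780Ki memory). A quick check of that arithmetic, with the values copied from the tables:
	
	// Sketch: verify the Allocated resources percentages from the node description.
	package main
	
	import "fmt"
	
	func main() {
		// Allocatable capacity from the node description above.
		cpuMilli := 2 * 1000 // 2 CPUs -> 2000m
		memKi := 3912780     // memory: 3912780Ki
	
		// Requests/limits from the "Allocated resources" table.
		cpuReqMilli := 850
		memReqKi := 370 * 1024 // 370Mi
		memLimKi := 170 * 1024 // 170Mi
	
		fmt.Printf("cpu requests:    %d%%\n", cpuReqMilli*100/cpuMilli) // 42%
		fmt.Printf("memory requests: %d%%\n", memReqKi*100/memKi)       // 9%
		fmt.Printf("memory limits:   %d%%\n", memLimKi*100/memKi)       // 4%
	}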
	
	
	==> dmesg <==
	[ +27.839945] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.867762] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.427541] kauditd_printk_skb: 12 callbacks suppressed
	[Aug28 16:54] kauditd_printk_skb: 26 callbacks suppressed
	[  +5.057528] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.649408] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.118173] kauditd_printk_skb: 45 callbacks suppressed
	[ +23.135978] kauditd_printk_skb: 6 callbacks suppressed
	[Aug28 16:55] kauditd_printk_skb: 30 callbacks suppressed
	[Aug28 16:56] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 16:59] kauditd_printk_skb: 28 callbacks suppressed
	[Aug28 17:02] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.029063] kauditd_printk_skb: 7 callbacks suppressed
	[ +13.016574] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.965420] kauditd_printk_skb: 11 callbacks suppressed
	[Aug28 17:03] kauditd_printk_skb: 10 callbacks suppressed
	[ +15.023131] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.275073] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.034230] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.067261] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.756849] kauditd_printk_skb: 19 callbacks suppressed
	[  +6.874514] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.278471] kauditd_printk_skb: 25 callbacks suppressed
	[Aug28 17:05] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.002472] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f5afe4e2c7c301b3874558efb5d1ceca9c33d3fe7a2a02041d782db4f64428ca] <==
	{"level":"warn","ts":"2024-08-28T16:53:34.593913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.743715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97\" ","response":"range_response_count:1 size:781"}
	{"level":"info","ts":"2024-08-28T16:53:34.593958Z","caller":"traceutil/trace.go:171","msg":"trace[1783725651] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-89d5ffd79-hhsh7.17eff2a74e25dc97; range_end:; response_count:1; response_revision:925; }","duration":"207.796464ms","start":"2024-08-28T16:53:34.386149Z","end":"2024-08-28T16:53:34.593946Z","steps":["trace[1783725651] 'range keys from in-memory index tree'  (duration: 207.618074ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:42.558909Z","caller":"traceutil/trace.go:171","msg":"trace[1823174305] transaction","detail":"{read_only:false; response_revision:947; number_of_response:1; }","duration":"287.407068ms","start":"2024-08-28T16:53:42.271483Z","end":"2024-08-28T16:53:42.558890Z","steps":["trace[1823174305] 'process raft request'  (duration: 287.285479ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:53:47.622998Z","caller":"traceutil/trace.go:171","msg":"trace[932674930] transaction","detail":"{read_only:false; response_revision:961; number_of_response:1; }","duration":"104.356711ms","start":"2024-08-28T16:53:47.518628Z","end":"2024-08-28T16:53:47.622985Z","steps":["trace[932674930] 'process raft request'  (duration: 104.239303ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.302592Z","caller":"traceutil/trace.go:171","msg":"trace[1524241447] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"230.314513ms","start":"2024-08-28T16:54:55.072243Z","end":"2024-08-28T16:54:55.302557Z","steps":["trace[1524241447] 'process raft request'  (duration: 229.745464ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:55.303259Z","caller":"traceutil/trace.go:171","msg":"trace[1962350705] linearizableReadLoop","detail":"{readStateIndex:1311; appliedIndex:1310; }","duration":"198.610451ms","start":"2024-08-28T16:54:55.103505Z","end":"2024-08-28T16:54:55.302115Z","steps":["trace[1962350705] 'read index received'  (duration: 198.397171ms)","trace[1962350705] 'applied index is now lower than readState.Index'  (duration: 212.527µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T16:54:55.303540Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.965293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-08-28T16:54:55.303613Z","caller":"traceutil/trace.go:171","msg":"trace[178375171] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1269; }","duration":"200.115796ms","start":"2024-08-28T16:54:55.103483Z","end":"2024-08-28T16:54:55.303599Z","steps":["trace[178375171] 'agreement among raft nodes before linearized reading'  (duration: 199.893413ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:02:42.414396Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1528}
	{"level":"info","ts":"2024-08-28T17:02:42.450275Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1528,"took":"35.104221ms","hash":1413996905,"current-db-size-bytes":6000640,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3461120,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-08-28T17:02:42.450436Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1413996905,"revision":1528,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T17:02:53.325278Z","caller":"traceutil/trace.go:171","msg":"trace[2108371229] linearizableReadLoop","detail":"{readStateIndex:2211; appliedIndex:2210; }","duration":"459.294326ms","start":"2024-08-28T17:02:52.865949Z","end":"2024-08-28T17:02:53.325243Z","steps":["trace[2108371229] 'read index received'  (duration: 459.150699ms)","trace[2108371229] 'applied index is now lower than readState.Index'  (duration: 142.943µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:02:53.325511Z","caller":"traceutil/trace.go:171","msg":"trace[424925906] transaction","detail":"{read_only:false; response_revision:2063; number_of_response:1; }","duration":"525.181818ms","start":"2024-08-28T17:02:52.800315Z","end":"2024-08-28T17:02:53.325497Z","steps":["trace[424925906] 'process raft request'  (duration: 524.829974ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325765Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.733213ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-08-28T17:02:53.325825Z","caller":"traceutil/trace.go:171","msg":"trace[162657861] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:2063; }","duration":"368.829874ms","start":"2024-08-28T17:02:52.956983Z","end":"2024-08-28T17:02:53.325812Z","steps":["trace[162657861] 'agreement among raft nodes before linearized reading'  (duration: 368.661415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.325863Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.956950Z","time spent":"368.907244ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":577,"request content":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" "}
	{"level":"warn","ts":"2024-08-28T17:02:53.326000Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"460.0423ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:02:53.326031Z","caller":"traceutil/trace.go:171","msg":"trace[1263793325] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2063; }","duration":"460.081559ms","start":"2024-08-28T17:02:52.865944Z","end":"2024-08-28T17:02:53.326026Z","steps":["trace[1263793325] 'agreement among raft nodes before linearized reading'  (duration: 460.033068ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:02:53.327962Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:02:52.800270Z","time spent":"525.3055ms","remote":"127.0.0.1:52368","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2016 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"warn","ts":"2024-08-28T17:03:45.831964Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:03:45.434224Z","time spent":"397.729251ms","remote":"127.0.0.1:52116","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2024-08-28T17:04:18.231086Z","caller":"traceutil/trace.go:171","msg":"trace[2114474411] transaction","detail":"{read_only:false; response_revision:2520; number_of_response:1; }","duration":"163.39208ms","start":"2024-08-28T17:04:18.067647Z","end":"2024-08-28T17:04:18.231039Z","steps":["trace[2114474411] 'process raft request'  (duration: 163.273021ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:04:28.391969Z","caller":"traceutil/trace.go:171","msg":"trace[768217437] transaction","detail":"{read_only:false; response_revision:2530; number_of_response:1; }","duration":"114.240171ms","start":"2024-08-28T17:04:28.277712Z","end":"2024-08-28T17:04:28.391952Z","steps":["trace[768217437] 'process raft request'  (duration: 114.119686ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:07:42.422416Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1988}
	{"level":"info","ts":"2024-08-28T17:07:42.445438Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1988,"took":"22.178263ms","hash":4268038940,"current-db-size-bytes":6000640,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":4411392,"current-db-size-in-use":"4.4 MB"}
	{"level":"info","ts":"2024-08-28T17:07:42.445503Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4268038940,"revision":1988,"compact-revision":1528}
	
	
	==> gcp-auth [c026a720fa74eba1e8541f2a7b966494d993c5a08b1d4283dc5e78bc846f7ed4] <==
	2024/08/28 16:54:23 Ready to write response ...
	2024/08/28 17:02:37 Ready to marshal response ...
	2024/08/28 17:02:37 Ready to write response ...
	2024/08/28 17:02:46 Ready to marshal response ...
	2024/08/28 17:02:46 Ready to write response ...
	2024/08/28 17:02:50 Ready to marshal response ...
	2024/08/28 17:02:50 Ready to write response ...
	2024/08/28 17:03:06 Ready to marshal response ...
	2024/08/28 17:03:06 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:23 Ready to marshal response ...
	2024/08/28 17:03:23 Ready to write response ...
	2024/08/28 17:03:33 Ready to marshal response ...
	2024/08/28 17:03:33 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:41 Ready to marshal response ...
	2024/08/28 17:03:41 Ready to write response ...
	2024/08/28 17:03:52 Ready to marshal response ...
	2024/08/28 17:03:52 Ready to write response ...
	2024/08/28 17:05:15 Ready to marshal response ...
	2024/08/28 17:05:15 Ready to write response ...
	
	
	==> kernel <==
	 17:08:26 up 16 min,  0 users,  load average: 0.07, 0.29, 0.35
	Linux addons-990097 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e7f9f99f0e0ad0c541d2d8fe2fde30b69516a4e028637b7fbbf26da8e3274d83] <==
	 > logger="UnhandledError"
	E0828 16:54:47.903698       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	E0828 16:54:47.909556       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.64.33:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.64.33:443: connect: connection refused" logger="UnhandledError"
	I0828 16:54:47.980457       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0828 17:02:44.290693       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0828 17:02:45.318931       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0828 17:02:50.191896       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0828 17:02:50.441092       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.222.232"}
	I0828 17:03:01.316914       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:22.465921       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.465977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.486678       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.486850       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.594997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.595115       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.613013       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.614969       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:03:22.617986       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:03:22.618315       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:03:23.613355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:03:23.619379       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0828 17:03:23.735621       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0828 17:03:41.358241       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.113.183"}
	E0828 17:03:56.081411       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 192.168.39.195:8443->10.244.0.32:49968: read: connection reset by peer
	I0828 17:05:15.856419       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.186.179"}
	
	
	==> kube-controller-manager [b8d25fadc3e3bbfca40858237c2d8fa43d3e17fa2d47e3f4988b85875a5bf880] <==
	W0828 17:06:27.169729       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:06:27.169783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:06:30.321107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:06:30.321162       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:06:56.045397       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:06:56.045622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:06:56.517938       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:06:56.518071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:07:14.315628       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:07:14.315714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:07:14.768383       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:07:14.768426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:07:26.958621       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:07:26.958679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:07:47.078591       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:07:47.078648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:08:02.243386       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:08:02.243601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:08:09.850465       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:08:09.850551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:08:13.102621       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:08:13.102683       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:08:20.615689       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:08:20.615822       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:08:25.233391       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="11.972µs"
	
	
	==> kube-proxy [f41de974958b8092e46f4943adb90decacc1758b0fef4665344bf9a407664691] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 16:52:52.089415       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 16:52:52.099940       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.195"]
	E0828 16:52:52.099997       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:52.173377       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 16:52:52.173438       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 16:52:52.173468       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:52.175943       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:52.176378       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:52.176391       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:52.177695       1 config.go:197] "Starting service config controller"
	I0828 16:52:52.177716       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:52.177745       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:52.177750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:52.178237       1 config.go:326] "Starting node config controller"
	I0828 16:52:52.178244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:52.278337       1 shared_informer.go:320] Caches are synced for node config
	I0828 16:52:52.278370       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:52.278391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7c59931085105172a77d36059b151f5d1d5b6386187f99d750d91aec84b9e093] <==
	W0828 16:52:43.762343       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:43.762378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:43.767505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 16:52:43.767603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.593944       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 16:52:44.593990       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.649260       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:52:44.649415       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0828 16:52:44.667387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 16:52:44.667479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.675396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:52:44.675487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.740397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 16:52:44.740445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.770930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 16:52:44.770991       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.825118       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 16:52:44.825170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.869231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.869366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.933958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 16:52:44.934034       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:52:44.988755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 16:52:44.988802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0828 16:52:47.648648       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:07:36 addons-990097 kubelet[1192]: E0828 17:07:36.745244    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864856744590989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:07:46 addons-990097 kubelet[1192]: E0828 17:07:46.419280    1192 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:07:46 addons-990097 kubelet[1192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:07:46 addons-990097 kubelet[1192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:07:46 addons-990097 kubelet[1192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:07:46 addons-990097 kubelet[1192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:07:46 addons-990097 kubelet[1192]: E0828 17:07:46.748106    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864866747848596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:07:46 addons-990097 kubelet[1192]: E0828 17:07:46.748130    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864866747848596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:07:47 addons-990097 kubelet[1192]: E0828 17:07:47.406613    1192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="27dca925-9e7b-46e8-b9f4-9b11d07e0de2"
	Aug 28 17:07:56 addons-990097 kubelet[1192]: E0828 17:07:56.750014    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864876749739359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:07:56 addons-990097 kubelet[1192]: E0828 17:07:56.750388    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864876749739359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:02 addons-990097 kubelet[1192]: E0828 17:08:02.405777    1192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="27dca925-9e7b-46e8-b9f4-9b11d07e0de2"
	Aug 28 17:08:06 addons-990097 kubelet[1192]: E0828 17:08:06.752445    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864886752171293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:06 addons-990097 kubelet[1192]: E0828 17:08:06.752466    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864886752171293,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:16 addons-990097 kubelet[1192]: E0828 17:08:16.407685    1192 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="27dca925-9e7b-46e8-b9f4-9b11d07e0de2"
	Aug 28 17:08:16 addons-990097 kubelet[1192]: E0828 17:08:16.754419    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864896754140307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:16 addons-990097 kubelet[1192]: E0828 17:08:16.754455    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864896754140307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:26 addons-990097 kubelet[1192]: E0828 17:08:26.759520    1192 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906758886891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:26 addons-990097 kubelet[1192]: E0828 17:08:26.759574    1192 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724864906758886891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575907,},InodesUsed:&UInt64Value{Value:200,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.802779    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3af617c1-2322-4d0f-af32-35d80eaeaf8c-tmp-dir\") pod \"3af617c1-2322-4d0f-af32-35d80eaeaf8c\" (UID: \"3af617c1-2322-4d0f-af32-35d80eaeaf8c\") "
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.802827    1192 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdcx6\" (UniqueName: \"kubernetes.io/projected/3af617c1-2322-4d0f-af32-35d80eaeaf8c-kube-api-access-vdcx6\") pod \"3af617c1-2322-4d0f-af32-35d80eaeaf8c\" (UID: \"3af617c1-2322-4d0f-af32-35d80eaeaf8c\") "
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.803631    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af617c1-2322-4d0f-af32-35d80eaeaf8c-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3af617c1-2322-4d0f-af32-35d80eaeaf8c" (UID: "3af617c1-2322-4d0f-af32-35d80eaeaf8c"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.804722    1192 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af617c1-2322-4d0f-af32-35d80eaeaf8c-kube-api-access-vdcx6" (OuterVolumeSpecName: "kube-api-access-vdcx6") pod "3af617c1-2322-4d0f-af32-35d80eaeaf8c" (UID: "3af617c1-2322-4d0f-af32-35d80eaeaf8c"). InnerVolumeSpecName "kube-api-access-vdcx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.903858    1192 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3af617c1-2322-4d0f-af32-35d80eaeaf8c-tmp-dir\") on node \"addons-990097\" DevicePath \"\""
	Aug 28 17:08:26 addons-990097 kubelet[1192]: I0828 17:08:26.903914    1192 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vdcx6\" (UniqueName: \"kubernetes.io/projected/3af617c1-2322-4d0f-af32-35d80eaeaf8c-kube-api-access-vdcx6\") on node \"addons-990097\" DevicePath \"\""
	
	
	==> storage-provisioner [092298cdfb616d29c9eb726bc3e6f2e73dcd425a57d6ba40676129529c5c28d6] <==
	I0828 16:52:58.911009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:58.964276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:59.019593       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:59.214120       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:59.226396       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	I0828 16:52:59.227671       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"40ef84ee-3904-40bf-b67a-f3ab38dd9ae4", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e became leader
	I0828 16:52:59.636127       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-990097_2298ca45-abf7-4f73-afd1-326d2fb9f78e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-990097 -n addons-990097
helpers_test.go:261: (dbg) Run:  kubectl --context addons-990097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-84c5f94fbc-s6z6n
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-990097 describe pod busybox metrics-server-84c5f94fbc-s6z6n
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-990097 describe pod busybox metrics-server-84c5f94fbc-s6z6n: exit status 1 (64.728239ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-990097/192.168.39.195
	Start Time:       Wed, 28 Aug 2024 16:54:23 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.23
	IPs:
	  IP:  10.244.0.23
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58r55 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-58r55:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-990097
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m54s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-84c5f94fbc-s6z6n" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-990097 describe pod busybox metrics-server-84c5f94fbc-s6z6n: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (361.56s)
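The describe output above points at two distinct problems: the busybox pod is stuck in ImagePullBackOff because pulling gcr.io/k8s-minikube/busybox:1.28.4-glibc failed with a registry auth error, and metrics-server-84c5f94fbc-s6z6n no longer exists by the time the post-mortem describe runs. A minimal sketch of commands for confirming both against the same profile, assuming addons-990097 is still running, would be:

	# Inspect the busybox image-pull failure directly
	kubectl --context addons-990097 describe pod busybox -n default
	kubectl --context addons-990097 get events -n default --field-selector involvedObject.name=busybox
	# Check whether the metrics-server ReplicaSet has already scaled its pod away
	# (the k8s-app=metrics-server label is an assumption about the addon's manifests)
	kubectl --context addons-990097 get replicaset -n kube-system -l k8s-app=metrics-server
	# Capture the full minikube log bundle for later comparison
	out/minikube-linux-amd64 -p addons-990097 logs --file=metrics-server-postmortem.txt

This is only a diagnostic sketch, not part of the recorded test run; the label selector and output filename are illustrative.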

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 node stop m02 -v=7 --alsologtostderr
E0828 17:18:20.734725   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:41.216684   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:19:22.178887   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:19:23.523753   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:19:51.227795   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.458345096s)

                                                
                                                
-- stdout --
	* Stopping node "ha-240486-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:18:14.130947   33635 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:18:14.131204   33635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:18:14.131212   33635 out.go:358] Setting ErrFile to fd 2...
	I0828 17:18:14.131216   33635 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:18:14.131388   33635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:18:14.131615   33635 mustload.go:65] Loading cluster: ha-240486
	I0828 17:18:14.131970   33635 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:18:14.131983   33635 stop.go:39] StopHost: ha-240486-m02
	I0828 17:18:14.132328   33635 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:18:14.132371   33635 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:18:14.148923   33635 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0828 17:18:14.149381   33635 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:18:14.149913   33635 main.go:141] libmachine: Using API Version  1
	I0828 17:18:14.149932   33635 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:18:14.150249   33635 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:18:14.152459   33635 out.go:177] * Stopping node "ha-240486-m02"  ...
	I0828 17:18:14.153517   33635 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 17:18:14.153535   33635 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:18:14.153695   33635 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 17:18:14.153722   33635 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:18:14.156387   33635 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:18:14.156802   33635 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:18:14.156826   33635 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:18:14.157087   33635 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:18:14.157249   33635 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:18:14.157398   33635 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:18:14.157540   33635 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:18:14.241830   33635 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 17:18:14.296070   33635 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 17:18:14.354234   33635 main.go:141] libmachine: Stopping "ha-240486-m02"...
	I0828 17:18:14.354268   33635 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:18:14.355862   33635 main.go:141] libmachine: (ha-240486-m02) Calling .Stop
	I0828 17:18:14.359303   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 0/120
	I0828 17:18:15.360567   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 1/120
	I0828 17:18:16.362094   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 2/120
	I0828 17:18:17.364021   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 3/120
	I0828 17:18:18.366575   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 4/120
	I0828 17:18:19.368620   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 5/120
	I0828 17:18:20.370031   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 6/120
	I0828 17:18:21.371409   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 7/120
	I0828 17:18:22.373335   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 8/120
	I0828 17:18:23.375032   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 9/120
	I0828 17:18:24.377226   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 10/120
	I0828 17:18:25.378605   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 11/120
	I0828 17:18:26.380523   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 12/120
	I0828 17:18:27.381676   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 13/120
	I0828 17:18:28.383110   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 14/120
	I0828 17:18:29.385048   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 15/120
	I0828 17:18:30.386324   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 16/120
	I0828 17:18:31.388446   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 17/120
	I0828 17:18:32.390765   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 18/120
	I0828 17:18:33.392060   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 19/120
	I0828 17:18:34.394330   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 20/120
	I0828 17:18:35.396495   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 21/120
	I0828 17:18:36.397858   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 22/120
	I0828 17:18:37.399044   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 23/120
	I0828 17:18:38.400637   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 24/120
	I0828 17:18:39.402414   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 25/120
	I0828 17:18:40.403678   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 26/120
	I0828 17:18:41.405341   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 27/120
	I0828 17:18:42.406605   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 28/120
	I0828 17:18:43.407861   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 29/120
	I0828 17:18:44.410109   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 30/120
	I0828 17:18:45.411520   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 31/120
	I0828 17:18:46.412884   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 32/120
	I0828 17:18:47.414240   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 33/120
	I0828 17:18:48.415430   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 34/120
	I0828 17:18:49.417276   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 35/120
	I0828 17:18:50.418546   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 36/120
	I0828 17:18:51.420449   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 37/120
	I0828 17:18:52.421865   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 38/120
	I0828 17:18:53.423530   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 39/120
	I0828 17:18:54.425697   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 40/120
	I0828 17:18:55.427018   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 41/120
	I0828 17:18:56.428187   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 42/120
	I0828 17:18:57.429409   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 43/120
	I0828 17:18:58.430585   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 44/120
	I0828 17:18:59.432456   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 45/120
	I0828 17:19:00.434702   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 46/120
	I0828 17:19:01.437252   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 47/120
	I0828 17:19:02.438559   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 48/120
	I0828 17:19:03.440736   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 49/120
	I0828 17:19:04.443090   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 50/120
	I0828 17:19:05.444779   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 51/120
	I0828 17:19:06.446141   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 52/120
	I0828 17:19:07.447842   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 53/120
	I0828 17:19:08.448985   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 54/120
	I0828 17:19:09.450898   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 55/120
	I0828 17:19:10.452082   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 56/120
	I0828 17:19:11.453375   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 57/120
	I0828 17:19:12.455048   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 58/120
	I0828 17:19:13.456528   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 59/120
	I0828 17:19:14.458681   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 60/120
	I0828 17:19:15.459968   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 61/120
	I0828 17:19:16.461437   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 62/120
	I0828 17:19:17.462826   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 63/120
	I0828 17:19:18.464001   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 64/120
	I0828 17:19:19.466171   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 65/120
	I0828 17:19:20.467615   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 66/120
	I0828 17:19:21.468874   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 67/120
	I0828 17:19:22.470154   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 68/120
	I0828 17:19:23.471456   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 69/120
	I0828 17:19:24.473595   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 70/120
	I0828 17:19:25.474927   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 71/120
	I0828 17:19:26.476401   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 72/120
	I0828 17:19:27.477717   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 73/120
	I0828 17:19:28.479064   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 74/120
	I0828 17:19:29.480403   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 75/120
	I0828 17:19:30.481724   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 76/120
	I0828 17:19:31.483039   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 77/120
	I0828 17:19:32.484687   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 78/120
	I0828 17:19:33.486183   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 79/120
	I0828 17:19:34.488249   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 80/120
	I0828 17:19:35.489501   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 81/120
	I0828 17:19:36.490895   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 82/120
	I0828 17:19:37.492169   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 83/120
	I0828 17:19:38.493619   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 84/120
	I0828 17:19:39.495470   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 85/120
	I0828 17:19:40.496934   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 86/120
	I0828 17:19:41.498255   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 87/120
	I0828 17:19:42.500509   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 88/120
	I0828 17:19:43.501637   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 89/120
	I0828 17:19:44.502982   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 90/120
	I0828 17:19:45.504287   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 91/120
	I0828 17:19:46.505616   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 92/120
	I0828 17:19:47.506793   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 93/120
	I0828 17:19:48.508575   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 94/120
	I0828 17:19:49.510432   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 95/120
	I0828 17:19:50.512563   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 96/120
	I0828 17:19:51.513809   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 97/120
	I0828 17:19:52.515386   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 98/120
	I0828 17:19:53.517334   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 99/120
	I0828 17:19:54.519532   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 100/120
	I0828 17:19:55.520888   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 101/120
	I0828 17:19:56.522244   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 102/120
	I0828 17:19:57.524780   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 103/120
	I0828 17:19:58.526060   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 104/120
	I0828 17:19:59.527212   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 105/120
	I0828 17:20:00.528523   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 106/120
	I0828 17:20:01.529965   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 107/120
	I0828 17:20:02.531195   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 108/120
	I0828 17:20:03.532349   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 109/120
	I0828 17:20:04.534656   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 110/120
	I0828 17:20:05.536076   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 111/120
	I0828 17:20:06.537551   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 112/120
	I0828 17:20:07.538762   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 113/120
	I0828 17:20:08.540671   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 114/120
	I0828 17:20:09.542510   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 115/120
	I0828 17:20:10.544574   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 116/120
	I0828 17:20:11.545989   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 117/120
	I0828 17:20:12.547444   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 118/120
	I0828 17:20:13.548894   33635 main.go:141] libmachine: (ha-240486-m02) Waiting for machine to stop 119/120
	I0828 17:20:14.550159   33635 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 17:20:14.550280   33635 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-240486 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (19.15433292s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:20:14.590005   34064 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:20:14.590149   34064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:14.590161   34064 out.go:358] Setting ErrFile to fd 2...
	I0828 17:20:14.590165   34064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:14.590361   34064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:20:14.590579   34064 out.go:352] Setting JSON to false
	I0828 17:20:14.590609   34064 mustload.go:65] Loading cluster: ha-240486
	I0828 17:20:14.590643   34064 notify.go:220] Checking for updates...
	I0828 17:20:14.591176   34064 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:20:14.591196   34064 status.go:255] checking status of ha-240486 ...
	I0828 17:20:14.591655   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.591698   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.612120   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32869
	I0828 17:20:14.612677   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.613199   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.613219   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.613679   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.613878   34064 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:20:14.615782   34064 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:20:14.615799   34064 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:14.616174   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.616223   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.631807   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I0828 17:20:14.632193   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.632631   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.632648   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.632964   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.633130   34064 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:20:14.636231   34064 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:14.636660   34064 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:14.636693   34064 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:14.636872   34064 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:14.637316   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.637365   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.651946   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I0828 17:20:14.652357   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.652800   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.652821   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.653126   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.653358   34064 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:20:14.653534   34064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:14.653555   34064 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:20:14.656133   34064 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:14.656546   34064 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:14.656573   34064 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:14.656702   34064 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:20:14.656871   34064 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:20:14.657003   34064 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:20:14.657154   34064 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:20:14.747074   34064 ssh_runner.go:195] Run: systemctl --version
	I0828 17:20:14.754809   34064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:14.774613   34064 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:14.774642   34064 api_server.go:166] Checking apiserver status ...
	I0828 17:20:14.774680   34064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:14.789380   34064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:20:14.801157   34064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:14.801212   34064 ssh_runner.go:195] Run: ls
	I0828 17:20:14.805220   34064 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:14.811769   34064 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:14.811794   34064 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:20:14.811807   34064 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:14.811826   34064 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:20:14.812233   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.812273   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.827623   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40001
	I0828 17:20:14.828025   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.828459   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.828479   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.828833   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.829035   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:20:14.830765   34064 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:20:14.830780   34064 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:14.831056   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.831087   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.845271   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40087
	I0828 17:20:14.845645   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.846167   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.846188   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.846491   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.846700   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:20:14.849199   34064 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:14.849573   34064 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:14.849597   34064 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:14.849733   34064 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:14.850010   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:14.850043   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:14.864919   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0828 17:20:14.865300   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:14.865806   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:14.865826   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:14.866136   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:14.866312   34064 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:20:14.866479   34064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:14.866500   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:20:14.868985   34064 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:14.869471   34064 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:14.869492   34064 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:14.869630   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:20:14.869776   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:20:14.869904   34064 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:20:14.870028   34064 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:20:33.354326   34064 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:33.354444   34064 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:20:33.354458   34064 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:33.354470   34064 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:20:33.354485   34064 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:33.354492   34064 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:20:33.354793   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.354832   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.369525   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
	I0828 17:20:33.369901   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.370400   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.370423   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.370797   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.370984   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:20:33.372583   34064 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:20:33.372598   34064 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:33.372921   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.372956   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.387772   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0828 17:20:33.388201   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.388683   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.388707   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.389006   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.389189   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:20:33.391783   34064 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:33.392195   34064 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:33.392221   34064 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:33.392359   34064 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:33.392650   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.392687   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.407087   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44933
	I0828 17:20:33.407466   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.407964   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.407994   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.408300   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.408498   34064 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:20:33.408663   34064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:33.408678   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:20:33.411485   34064 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:33.411920   34064 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:33.411955   34064 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:33.412128   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:20:33.412290   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:20:33.412470   34064 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:20:33.412598   34064 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:20:33.494607   34064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:33.511164   34064 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:33.511190   34064 api_server.go:166] Checking apiserver status ...
	I0828 17:20:33.511221   34064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:33.526234   34064 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:20:33.536908   34064 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:33.536957   34064 ssh_runner.go:195] Run: ls
	I0828 17:20:33.541314   34064 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:33.545817   34064 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:33.545836   34064 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:20:33.545847   34064 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:33.545862   34064 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:20:33.546260   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.546299   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.561503   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46829
	I0828 17:20:33.562005   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.562493   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.562512   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.562794   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.562981   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:20:33.564481   34064 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:20:33.564494   34064 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:33.564884   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.564925   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.580205   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32887
	I0828 17:20:33.580599   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.581117   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.581138   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.581430   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.581623   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:20:33.584479   34064 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:33.584870   34064 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:33.584901   34064 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:33.585096   34064 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:33.585506   34064 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:33.585551   34064 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:33.600400   34064 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42969
	I0828 17:20:33.600782   34064 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:33.601207   34064 main.go:141] libmachine: Using API Version  1
	I0828 17:20:33.601228   34064 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:33.601502   34064 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:33.601829   34064 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:20:33.602008   34064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:33.602032   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:20:33.604792   34064 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:33.605170   34064 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:33.605209   34064 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:33.605328   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:20:33.605475   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:20:33.605586   34064 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:20:33.605709   34064 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:20:33.687104   34064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:33.703853   34064 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-240486 -n ha-240486
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-240486 logs -n 25: (1.419234203s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m03_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m04 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp testdata/cp-test.txt                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m04_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03:/home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m03 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-240486 node stop m02 -v=7                                                     | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:13:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:13:48.262328   29200 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:13:48.262571   29200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:48.262579   29200 out.go:358] Setting ErrFile to fd 2...
	I0828 17:13:48.262584   29200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:48.262740   29200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:13:48.263283   29200 out.go:352] Setting JSON to false
	I0828 17:13:48.264133   29200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3374,"bootTime":1724861854,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:13:48.264183   29200 start.go:139] virtualization: kvm guest
	I0828 17:13:48.266113   29200 out.go:177] * [ha-240486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:13:48.267263   29200 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:13:48.267283   29200 notify.go:220] Checking for updates...
	I0828 17:13:48.269420   29200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:13:48.270714   29200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:13:48.271818   29200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.273007   29200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:13:48.274135   29200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:13:48.275295   29200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:13:48.309572   29200 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 17:13:48.310717   29200 start.go:297] selected driver: kvm2
	I0828 17:13:48.310731   29200 start.go:901] validating driver "kvm2" against <nil>
	I0828 17:13:48.310747   29200 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:13:48.311429   29200 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:13:48.311503   29200 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:13:48.327499   29200 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:13:48.327546   29200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 17:13:48.327783   29200 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:13:48.327856   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:13:48.327870   29200 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0828 17:13:48.327878   29200 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 17:13:48.327941   29200 start.go:340] cluster config:
	{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:13:48.328042   29200 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:13:48.329722   29200 out.go:177] * Starting "ha-240486" primary control-plane node in "ha-240486" cluster
	I0828 17:13:48.330806   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:13:48.330841   29200 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:13:48.330853   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:13:48.330952   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:13:48.330969   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:13:48.331293   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:13:48.331317   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json: {Name:mkc18ce99584c5845a4945732a372403690216b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:13:48.331469   29200 start.go:360] acquireMachinesLock for ha-240486: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:13:48.331509   29200 start.go:364] duration metric: took 23.247µs to acquireMachinesLock for "ha-240486"
	I0828 17:13:48.331531   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:13:48.331597   29200 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 17:13:48.333046   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:13:48.333193   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:48.333236   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:48.347585   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 17:13:48.348066   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:48.348580   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:13:48.348607   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:48.348949   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:48.349129   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:13:48.349265   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:13:48.349448   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:13:48.349473   29200 client.go:168] LocalClient.Create starting
	I0828 17:13:48.349513   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:13:48.349548   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:13:48.349575   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:13:48.349662   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:13:48.349689   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:13:48.349716   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:13:48.349740   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:13:48.349751   29200 main.go:141] libmachine: (ha-240486) Calling .PreCreateCheck
	I0828 17:13:48.350123   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:13:48.350527   29200 main.go:141] libmachine: Creating machine...
	I0828 17:13:48.350539   29200 main.go:141] libmachine: (ha-240486) Calling .Create
	I0828 17:13:48.350664   29200 main.go:141] libmachine: (ha-240486) Creating KVM machine...
	I0828 17:13:48.351731   29200 main.go:141] libmachine: (ha-240486) DBG | found existing default KVM network
	I0828 17:13:48.352350   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.352226   29223 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014730}
	I0828 17:13:48.352434   29200 main.go:141] libmachine: (ha-240486) DBG | created network xml: 
	I0828 17:13:48.352456   29200 main.go:141] libmachine: (ha-240486) DBG | <network>
	I0828 17:13:48.352467   29200 main.go:141] libmachine: (ha-240486) DBG |   <name>mk-ha-240486</name>
	I0828 17:13:48.352477   29200 main.go:141] libmachine: (ha-240486) DBG |   <dns enable='no'/>
	I0828 17:13:48.352497   29200 main.go:141] libmachine: (ha-240486) DBG |   
	I0828 17:13:48.352517   29200 main.go:141] libmachine: (ha-240486) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0828 17:13:48.352523   29200 main.go:141] libmachine: (ha-240486) DBG |     <dhcp>
	I0828 17:13:48.352529   29200 main.go:141] libmachine: (ha-240486) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0828 17:13:48.352535   29200 main.go:141] libmachine: (ha-240486) DBG |     </dhcp>
	I0828 17:13:48.352540   29200 main.go:141] libmachine: (ha-240486) DBG |   </ip>
	I0828 17:13:48.352545   29200 main.go:141] libmachine: (ha-240486) DBG |   
	I0828 17:13:48.352551   29200 main.go:141] libmachine: (ha-240486) DBG | </network>
	I0828 17:13:48.352559   29200 main.go:141] libmachine: (ha-240486) DBG | 
	I0828 17:13:48.357237   29200 main.go:141] libmachine: (ha-240486) DBG | trying to create private KVM network mk-ha-240486 192.168.39.0/24...
	I0828 17:13:48.421793   29200 main.go:141] libmachine: (ha-240486) DBG | private KVM network mk-ha-240486 192.168.39.0/24 created
	I0828 17:13:48.421848   29200 main.go:141] libmachine: (ha-240486) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 ...
	I0828 17:13:48.421865   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.421778   29223 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.421890   29200 main.go:141] libmachine: (ha-240486) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:13:48.421984   29200 main.go:141] libmachine: (ha-240486) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:13:48.660331   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.660212   29223 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa...
	I0828 17:13:48.911596   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.911454   29223 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/ha-240486.rawdisk...
	I0828 17:13:48.911642   29200 main.go:141] libmachine: (ha-240486) DBG | Writing magic tar header
	I0828 17:13:48.911652   29200 main.go:141] libmachine: (ha-240486) DBG | Writing SSH key tar header
	I0828 17:13:48.911660   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.911573   29223 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 ...
	I0828 17:13:48.911670   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486
	I0828 17:13:48.911715   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 (perms=drwx------)
	I0828 17:13:48.911741   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:13:48.911752   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:13:48.911781   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.911788   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:13:48.911800   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:13:48.911809   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:13:48.911824   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:13:48.911839   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:13:48.911844   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:13:48.911853   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:13:48.911860   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home
	I0828 17:13:48.911865   29200 main.go:141] libmachine: (ha-240486) Creating domain...
	I0828 17:13:48.911871   29200 main.go:141] libmachine: (ha-240486) DBG | Skipping /home - not owner
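The permission-fixing pass above walks from the machine directory up through its parents, adding traverse permission on every directory the current user owns and leaving the rest alone (hence "Skipping /home - not owner"). A small Linux-only sketch of that idea, simplified relative to the exact modes the driver sets:

	package storeperms
	
	import (
		"os"
		"path/filepath"
		"syscall"
	)
	
	// fixParentPerms adds the owner-execute bit to dir and each of its parents
	// that the current user owns, so the machine store path stays traversable.
	// Directories owned by another user are left untouched.
	func fixParentPerms(dir string) error {
		uid := os.Getuid()
		for d := dir; d != "/"; d = filepath.Dir(d) {
			info, err := os.Stat(d)
			if err != nil {
				return err
			}
			st, ok := info.Sys().(*syscall.Stat_t)
			if !ok || int(st.Uid) != uid {
				continue // not the owner - skip, as the log does for /home
			}
			if err := os.Chmod(d, info.Mode().Perm()|0o100); err != nil {
				return err
			}
		}
		return nil
	}
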
	I0828 17:13:48.912933   29200 main.go:141] libmachine: (ha-240486) define libvirt domain using xml: 
	I0828 17:13:48.912959   29200 main.go:141] libmachine: (ha-240486) <domain type='kvm'>
	I0828 17:13:48.912981   29200 main.go:141] libmachine: (ha-240486)   <name>ha-240486</name>
	I0828 17:13:48.912994   29200 main.go:141] libmachine: (ha-240486)   <memory unit='MiB'>2200</memory>
	I0828 17:13:48.913022   29200 main.go:141] libmachine: (ha-240486)   <vcpu>2</vcpu>
	I0828 17:13:48.913040   29200 main.go:141] libmachine: (ha-240486)   <features>
	I0828 17:13:48.913048   29200 main.go:141] libmachine: (ha-240486)     <acpi/>
	I0828 17:13:48.913055   29200 main.go:141] libmachine: (ha-240486)     <apic/>
	I0828 17:13:48.913061   29200 main.go:141] libmachine: (ha-240486)     <pae/>
	I0828 17:13:48.913074   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913083   29200 main.go:141] libmachine: (ha-240486)   </features>
	I0828 17:13:48.913094   29200 main.go:141] libmachine: (ha-240486)   <cpu mode='host-passthrough'>
	I0828 17:13:48.913106   29200 main.go:141] libmachine: (ha-240486)   
	I0828 17:13:48.913120   29200 main.go:141] libmachine: (ha-240486)   </cpu>
	I0828 17:13:48.913131   29200 main.go:141] libmachine: (ha-240486)   <os>
	I0828 17:13:48.913139   29200 main.go:141] libmachine: (ha-240486)     <type>hvm</type>
	I0828 17:13:48.913144   29200 main.go:141] libmachine: (ha-240486)     <boot dev='cdrom'/>
	I0828 17:13:48.913151   29200 main.go:141] libmachine: (ha-240486)     <boot dev='hd'/>
	I0828 17:13:48.913157   29200 main.go:141] libmachine: (ha-240486)     <bootmenu enable='no'/>
	I0828 17:13:48.913164   29200 main.go:141] libmachine: (ha-240486)   </os>
	I0828 17:13:48.913177   29200 main.go:141] libmachine: (ha-240486)   <devices>
	I0828 17:13:48.913187   29200 main.go:141] libmachine: (ha-240486)     <disk type='file' device='cdrom'>
	I0828 17:13:48.913217   29200 main.go:141] libmachine: (ha-240486)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/boot2docker.iso'/>
	I0828 17:13:48.913241   29200 main.go:141] libmachine: (ha-240486)       <target dev='hdc' bus='scsi'/>
	I0828 17:13:48.913255   29200 main.go:141] libmachine: (ha-240486)       <readonly/>
	I0828 17:13:48.913269   29200 main.go:141] libmachine: (ha-240486)     </disk>
	I0828 17:13:48.913287   29200 main.go:141] libmachine: (ha-240486)     <disk type='file' device='disk'>
	I0828 17:13:48.913303   29200 main.go:141] libmachine: (ha-240486)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:13:48.913318   29200 main.go:141] libmachine: (ha-240486)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/ha-240486.rawdisk'/>
	I0828 17:13:48.913329   29200 main.go:141] libmachine: (ha-240486)       <target dev='hda' bus='virtio'/>
	I0828 17:13:48.913339   29200 main.go:141] libmachine: (ha-240486)     </disk>
	I0828 17:13:48.913347   29200 main.go:141] libmachine: (ha-240486)     <interface type='network'>
	I0828 17:13:48.913360   29200 main.go:141] libmachine: (ha-240486)       <source network='mk-ha-240486'/>
	I0828 17:13:48.913370   29200 main.go:141] libmachine: (ha-240486)       <model type='virtio'/>
	I0828 17:13:48.913387   29200 main.go:141] libmachine: (ha-240486)     </interface>
	I0828 17:13:48.913403   29200 main.go:141] libmachine: (ha-240486)     <interface type='network'>
	I0828 17:13:48.913411   29200 main.go:141] libmachine: (ha-240486)       <source network='default'/>
	I0828 17:13:48.913421   29200 main.go:141] libmachine: (ha-240486)       <model type='virtio'/>
	I0828 17:13:48.913433   29200 main.go:141] libmachine: (ha-240486)     </interface>
	I0828 17:13:48.913444   29200 main.go:141] libmachine: (ha-240486)     <serial type='pty'>
	I0828 17:13:48.913456   29200 main.go:141] libmachine: (ha-240486)       <target port='0'/>
	I0828 17:13:48.913465   29200 main.go:141] libmachine: (ha-240486)     </serial>
	I0828 17:13:48.913494   29200 main.go:141] libmachine: (ha-240486)     <console type='pty'>
	I0828 17:13:48.913512   29200 main.go:141] libmachine: (ha-240486)       <target type='serial' port='0'/>
	I0828 17:13:48.913523   29200 main.go:141] libmachine: (ha-240486)     </console>
	I0828 17:13:48.913533   29200 main.go:141] libmachine: (ha-240486)     <rng model='virtio'>
	I0828 17:13:48.913546   29200 main.go:141] libmachine: (ha-240486)       <backend model='random'>/dev/random</backend>
	I0828 17:13:48.913564   29200 main.go:141] libmachine: (ha-240486)     </rng>
	I0828 17:13:48.913573   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913584   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913592   29200 main.go:141] libmachine: (ha-240486)   </devices>
	I0828 17:13:48.913601   29200 main.go:141] libmachine: (ha-240486) </domain>
	I0828 17:13:48.913613   29200 main.go:141] libmachine: (ha-240486) 
	I0828 17:13:48.918440   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:69:65:7c in network default
	I0828 17:13:48.919008   29200 main.go:141] libmachine: (ha-240486) Ensuring networks are active...
	I0828 17:13:48.919027   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:48.919726   29200 main.go:141] libmachine: (ha-240486) Ensuring network default is active
	I0828 17:13:48.920030   29200 main.go:141] libmachine: (ha-240486) Ensuring network mk-ha-240486 is active
	I0828 17:13:48.920468   29200 main.go:141] libmachine: (ha-240486) Getting domain xml...
	I0828 17:13:48.921207   29200 main.go:141] libmachine: (ha-240486) Creating domain...
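With the network in place and the domain XML assembled, the guest itself is a define-and-start pair of libvirt operations, which is what "define libvirt domain using xml" and "Creating domain..." correspond to. Sketched here with the virsh CLI (the driver itself goes through the libvirt API):

	package kvmdomain
	
	import (
		"fmt"
		"os/exec"
	)
	
	// defineAndStartDomain registers the domain XML with libvirt and boots the
	// resulting VM.
	func defineAndStartDomain(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}
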
	I0828 17:13:50.102460   29200 main.go:141] libmachine: (ha-240486) Waiting to get IP...
	I0828 17:13:50.103099   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.103421   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.103472   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.103409   29223 retry.go:31] will retry after 253.535151ms: waiting for machine to come up
	I0828 17:13:50.359134   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.359644   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.359687   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.359620   29223 retry.go:31] will retry after 316.872772ms: waiting for machine to come up
	I0828 17:13:50.678183   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.678576   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.678598   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.678528   29223 retry.go:31] will retry after 461.024783ms: waiting for machine to come up
	I0828 17:13:51.140747   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:51.141160   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:51.141187   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:51.141110   29223 retry.go:31] will retry after 397.899332ms: waiting for machine to come up
	I0828 17:13:51.540611   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:51.540944   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:51.540970   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:51.540900   29223 retry.go:31] will retry after 522.638296ms: waiting for machine to come up
	I0828 17:13:52.064600   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:52.064967   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:52.064991   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:52.064946   29223 retry.go:31] will retry after 589.769235ms: waiting for machine to come up
	I0828 17:13:52.656653   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:52.657074   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:52.657113   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:52.657020   29223 retry.go:31] will retry after 753.231977ms: waiting for machine to come up
	I0828 17:13:53.411846   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:53.412189   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:53.412210   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:53.412163   29223 retry.go:31] will retry after 954.837864ms: waiting for machine to come up
	I0828 17:13:54.368491   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:54.368908   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:54.368931   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:54.368870   29223 retry.go:31] will retry after 1.471935642s: waiting for machine to come up
	I0828 17:13:55.841866   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:55.842270   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:55.842294   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:55.842208   29223 retry.go:31] will retry after 2.247459315s: waiting for machine to come up
	I0828 17:13:58.092692   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:58.093213   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:58.093266   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:58.093202   29223 retry.go:31] will retry after 2.877612232s: waiting for machine to come up
	I0828 17:14:00.974142   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:00.974458   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:14:00.974476   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:14:00.974435   29223 retry.go:31] will retry after 3.170605692s: waiting for machine to come up
	I0828 17:14:04.146350   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:04.146852   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:14:04.146877   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:14:04.146813   29223 retry.go:31] will retry after 3.284470654s: waiting for machine to come up
	I0828 17:14:07.435035   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.435406   29200 main.go:141] libmachine: (ha-240486) Found IP for machine: 192.168.39.227
	I0828 17:14:07.435435   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has current primary IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
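The "will retry after ..." lines above are the driver polling libvirt's DHCP leases with a growing, jittered delay until the new MAC shows up with an address. The shape of that loop, with a hypothetical lookupLease helper standing in for the real lease query:

	package ipwait
	
	import (
		"fmt"
		"time"
	)
	
	// waitForIP polls for a DHCP lease belonging to mac, sleeping a little
	// longer after each miss, until an address appears or the deadline passes.
	// lookupLease is a stand-in for the real libvirt lease lookup.
	func waitForIP(mac string, timeout time.Duration, lookupLease func(mac string) (string, bool)) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookupLease(mac); ok {
				return ip, nil
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the wait, roughly like the retries in the log
		}
		return "", fmt.Errorf("no DHCP lease for %s after %s", mac, timeout)
	}
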
	I0828 17:14:07.435444   29200 main.go:141] libmachine: (ha-240486) Reserving static IP address...
	I0828 17:14:07.435821   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find host DHCP lease matching {name: "ha-240486", mac: "52:54:00:3e:e0:a1", ip: "192.168.39.227"} in network mk-ha-240486
	I0828 17:14:07.506358   29200 main.go:141] libmachine: (ha-240486) DBG | Getting to WaitForSSH function...
	I0828 17:14:07.506380   29200 main.go:141] libmachine: (ha-240486) Reserved static IP address: 192.168.39.227
	I0828 17:14:07.506390   29200 main.go:141] libmachine: (ha-240486) Waiting for SSH to be available...
	I0828 17:14:07.508836   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.509214   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.509240   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.509354   29200 main.go:141] libmachine: (ha-240486) DBG | Using SSH client type: external
	I0828 17:14:07.509374   29200 main.go:141] libmachine: (ha-240486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa (-rw-------)
	I0828 17:14:07.509408   29200 main.go:141] libmachine: (ha-240486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:14:07.509427   29200 main.go:141] libmachine: (ha-240486) DBG | About to run SSH command:
	I0828 17:14:07.509439   29200 main.go:141] libmachine: (ha-240486) DBG | exit 0
	I0828 17:14:07.633862   29200 main.go:141] libmachine: (ha-240486) DBG | SSH cmd err, output: <nil>: 
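Once an address is known, "Waiting for SSH" amounts to running exit 0 on the guest through the external ssh client with the options dumped above and treating a zero exit status as "the machine is up". A stripped-down version of that probe (address, key path and retry count are per-machine, not fixed values):

	package sshwait
	
	import (
		"os/exec"
		"time"
	)
	
	// sshReady returns true once `ssh ... exit 0` succeeds against the guest,
	// i.e. sshd is up and the key is accepted.
	func sshReady(addr, keyPath string, attempts int) bool {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@" + addr,
			"exit", "0",
		}
		for i := 0; i < attempts; i++ {
			if exec.Command("ssh", args...).Run() == nil {
				return true
			}
			time.Sleep(2 * time.Second)
		}
		return false
	}
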
	I0828 17:14:07.634121   29200 main.go:141] libmachine: (ha-240486) KVM machine creation complete!
	I0828 17:14:07.634446   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:14:07.635133   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:07.635456   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:07.635666   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:14:07.635683   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:07.636928   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:14:07.636943   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:14:07.636949   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:14:07.636955   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.639165   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.639485   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.639516   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.639625   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.639802   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.639938   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.640074   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.640191   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.640420   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.640433   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:14:07.745224   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:07.745254   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:14:07.745263   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.747753   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.748023   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.748050   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.748171   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.748341   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.748522   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.748674   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.748855   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.749022   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.749032   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:14:07.854381   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:14:07.854464   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:14:07.854473   29200 main.go:141] libmachine: Provisioning with buildroot...
	I0828 17:14:07.854480   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:07.854702   29200 buildroot.go:166] provisioning hostname "ha-240486"
	I0828 17:14:07.854716   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:07.854879   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.857404   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.857710   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.857744   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.857904   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.858065   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.858281   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.858407   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.858556   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.858706   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.858717   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486 && echo "ha-240486" | sudo tee /etc/hostname
	I0828 17:14:07.975330   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:14:07.975405   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.977872   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.978216   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.978243   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.978429   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.978601   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.978743   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.978859   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.979002   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.979203   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.979220   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:14:08.094018   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:08.094049   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:14:08.094137   29200 buildroot.go:174] setting up certificates
	I0828 17:14:08.094169   29200 provision.go:84] configureAuth start
	I0828 17:14:08.094188   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:08.094498   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.097547   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.097924   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.097960   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.098127   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.100405   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.100666   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.100703   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.100814   29200 provision.go:143] copyHostCerts
	I0828 17:14:08.100848   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:08.100884   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:14:08.100906   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:08.100984   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:14:08.101076   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:08.101098   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:14:08.101103   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:08.101129   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:14:08.101176   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:08.101195   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:14:08.101202   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:08.101225   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:14:08.101277   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486 san=[127.0.0.1 192.168.39.227 ha-240486 localhost minikube]
	I0828 17:14:08.164479   29200 provision.go:177] copyRemoteCerts
	I0828 17:14:08.164536   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:14:08.164559   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.167061   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.167333   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.167359   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.167512   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.167692   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.167857   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.168015   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.251718   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:14:08.251814   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:14:08.275840   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:14:08.275911   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0828 17:14:08.299681   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:14:08.299739   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:14:08.322881   29200 provision.go:87] duration metric: took 228.695209ms to configureAuth
	I0828 17:14:08.322904   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:14:08.323068   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:08.323130   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.325441   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.325777   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.325803   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.326012   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.326217   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.326434   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.326581   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.326771   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:08.326921   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:08.326935   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:14:08.546447   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:14:08.546479   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:14:08.546487   29200 main.go:141] libmachine: (ha-240486) Calling .GetURL
	I0828 17:14:08.547669   29200 main.go:141] libmachine: (ha-240486) DBG | Using libvirt version 6000000
	I0828 17:14:08.549610   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.549959   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.549990   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.550162   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:14:08.550175   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:14:08.550183   29200 client.go:171] duration metric: took 20.200699308s to LocalClient.Create
	I0828 17:14:08.550208   29200 start.go:167] duration metric: took 20.200759521s to libmachine.API.Create "ha-240486"
	I0828 17:14:08.550221   29200 start.go:293] postStartSetup for "ha-240486" (driver="kvm2")
	I0828 17:14:08.550234   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:14:08.550256   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.550498   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:14:08.550522   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.552712   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.553058   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.553083   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.553226   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.553400   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.553556   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.553707   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.636478   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:14:08.640579   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:14:08.640614   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:14:08.640678   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:14:08.640748   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:14:08.640757   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:14:08.640843   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:14:08.649972   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:08.671793   29200 start.go:296] duration metric: took 121.561129ms for postStartSetup
	I0828 17:14:08.671838   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:14:08.672501   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.675302   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.675557   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.675583   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.675798   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:08.676012   29200 start.go:128] duration metric: took 20.344403229s to createHost
	I0828 17:14:08.676035   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.677935   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.678241   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.678266   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.678421   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.678608   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.678749   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.678881   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.679017   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:08.679172   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:08.679182   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:14:08.786455   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865248.759308588
	
	I0828 17:14:08.786484   29200 fix.go:216] guest clock: 1724865248.759308588
	I0828 17:14:08.786512   29200 fix.go:229] Guest: 2024-08-28 17:14:08.759308588 +0000 UTC Remote: 2024-08-28 17:14:08.676025288 +0000 UTC m=+20.448521902 (delta=83.2833ms)
	I0828 17:14:08.786570   29200 fix.go:200] guest clock delta is within tolerance: 83.2833ms
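The clock check above runs date +%s.%N on the guest, parses the output as a fractional Unix timestamp, and compares it with the host clock; only a delta outside the tolerance would trigger a clock adjustment. A minimal version of that comparison (the 1s threshold here is illustrative, not minikube's exact setting):

	package clockcheck
	
	import (
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses the guest's `date +%s.%N` output and returns how
	// far the guest clock is from the local one.
	func guestClockDelta(dateOutput string) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return time.Since(guest), nil
	}
	
	// withinTolerance reports whether a delta is small enough to leave the
	// guest clock alone.
	func withinTolerance(d time.Duration) bool {
		if d < 0 {
			d = -d
		}
		return d < time.Second
	}
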
	I0828 17:14:08.786578   29200 start.go:83] releasing machines lock for "ha-240486", held for 20.455057608s
	I0828 17:14:08.786605   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.786890   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.789379   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.789739   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.789765   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.789940   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790393   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790564   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790650   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:14:08.790699   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.790756   29200 ssh_runner.go:195] Run: cat /version.json
	I0828 17:14:08.790792   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.793063   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793216   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793339   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.793365   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793460   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.793594   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.793615   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793618   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.793774   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.793799   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.793972   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.793986   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.794148   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.794330   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.910458   29200 ssh_runner.go:195] Run: systemctl --version
	I0828 17:14:08.916156   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:14:09.069065   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:14:09.076762   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:14:09.076828   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:14:09.091408   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:14:09.091429   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:14:09.091489   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:14:09.106472   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:14:09.119494   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:14:09.119550   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:14:09.132644   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:14:09.145357   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:14:09.251477   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:14:09.388289   29200 docker.go:233] disabling docker service ...
	I0828 17:14:09.388378   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:14:09.402234   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:14:09.414586   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:14:09.544027   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:14:09.673320   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:14:09.686322   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:14:09.703741   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:14:09.703791   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.713385   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:14:09.713448   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.723776   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.736981   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.747413   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:14:09.757150   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.767031   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.782769   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.792250   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:14:09.800963   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:14:09.801007   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:14:09.813554   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:14:09.822146   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:09.947026   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
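The block of sed invocations above is how the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup) before reloading systemd and bouncing the service. Condensed into one helper, with run standing in for the SSH command runner seen in the log:

	package criosetup
	
	import "fmt"
	
	// configureCRIO applies the same config edits as the log and restarts
	// CRI-O. run is assumed to execute a shell command on the guest.
	func configureCRIO(run func(cmd string) error, pauseImage string) error {
		cmds := []string{
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, c := range cmds {
			if err := run(c); err != nil {
				return err
			}
		}
		return nil
	}
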
	I0828 17:14:10.034549   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:14:10.034618   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:14:10.039646   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:14:10.039710   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:14:10.043145   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:14:10.080667   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:14:10.080736   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:10.107279   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:10.140331   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:14:10.141540   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:10.144150   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:10.144534   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:10.144558   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:10.144717   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:14:10.148719   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:10.161645   29200 kubeadm.go:883] updating cluster {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:14:10.161744   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:14:10.161791   29200 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:14:10.193008   29200 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 17:14:10.193077   29200 ssh_runner.go:195] Run: which lz4
	I0828 17:14:10.196704   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0828 17:14:10.196806   29200 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 17:14:10.200474   29200 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 17:14:10.200512   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 17:14:11.366594   29200 crio.go:462] duration metric: took 1.169821448s to copy over tarball
	I0828 17:14:11.366678   29200 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 17:14:13.336766   29200 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.970049905s)
	I0828 17:14:13.336812   29200 crio.go:469] duration metric: took 1.970174251s to extract the tarball
	I0828 17:14:13.336823   29200 ssh_runner.go:146] rm: /preloaded.tar.lz4
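The preload sequence above boils down to: check whether /preloaded.tar.lz4 already exists on the guest, copy it over if not, untar it into /var with lz4 decompression and xattrs preserved, then delete the tarball. As a sketch, with run and scp as assumed stand-ins for the ssh_runner's command and file-copy primitives:

	package preload
	
	// loadPreload pushes the preloaded image tarball to the guest (if it is
	// not already there) and unpacks it into /var, mirroring the steps in the
	// log. run executes a remote shell command; scp copies a local file over.
	func loadPreload(run func(cmd string) error, scp func(local, remote string) error, localTarball string) error {
		const remote = "/preloaded.tar.lz4"
		if err := run(`stat -c "%s %y" ` + remote); err != nil {
			// Not on the guest yet - transfer it first.
			if err := scp(localTarball, remote); err != nil {
				return err
			}
		}
		if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + remote); err != nil {
			return err
		}
		return run("rm -f " + remote)
	}
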
	I0828 17:14:13.372537   29200 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:14:13.414366   29200 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:14:13.414391   29200 cache_images.go:84] Images are preloaded, skipping loading
	I0828 17:14:13.414398   29200 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0828 17:14:13.414499   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:14:13.414566   29200 ssh_runner.go:195] Run: crio config
	I0828 17:14:13.461771   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:14:13.461787   29200 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 17:14:13.461797   29200 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:14:13.461819   29200 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-240486 NodeName:ha-240486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:14:13.461952   29200 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-240486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
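	Note that the generated kubeadm config above still uses the deprecated kubeadm.k8s.io/v1beta3 API, which kubeadm itself flags in the warnings further down in this log. As a minimal sketch (assuming the bundled kubeadm v1.31.0 binary and a copy of the generated /var/tmp/minikube/kubeadm.yaml saved as old.yaml, both hypothetical names here), the migration command suggested by those warnings would preview the equivalent config under the newer API:
	  # old.yaml is a hypothetical copy of the generated kubeadm config shown above
	  $ kubeadm config migrate --old-config old.yaml --new-config new.yaml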
	
	I0828 17:14:13.461974   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:14:13.462016   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:14:13.478842   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:14:13.478947   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
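	Per this manifest, kube-vip runs as a static pod on the control plane, takes the lease named plndr-cp-lock for leader election (vip_leaderelection), and advertises the HA virtual IP 192.168.39.254 on eth0 via ARP, with control-plane load-balancing of port 8443 enabled (cp_enable/lb_enable). A minimal sketch of verifying this from the node, assuming SSH access to the ha-240486 VM (e.g. via minikube ssh -p ha-240486):
	  # the VIP should appear as an extra address on eth0 on the current leader
	  $ ip addr show eth0 | grep 192.168.39.254
	  # the leader-election lease held by kube-vip
	  $ kubectl -n kube-system get lease plndr-cp-lock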
	I0828 17:14:13.479005   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:13.488191   29200 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:14:13.488260   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0828 17:14:13.497268   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0828 17:14:13.512417   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:14:13.527562   29200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0828 17:14:13.542655   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0828 17:14:13.557823   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:14:13.561389   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:13.572690   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:13.688585   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:14:13.704412   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.227
	I0828 17:14:13.704444   29200 certs.go:194] generating shared ca certs ...
	I0828 17:14:13.704461   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.704627   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:14:13.704668   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:14:13.704676   29200 certs.go:256] generating profile certs ...
	I0828 17:14:13.704733   29200 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:14:13.704749   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt with IP's: []
	I0828 17:14:13.831682   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt ...
	I0828 17:14:13.831708   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt: {Name:mk66759107edf8d0bebbbe02121a430074fdfe10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.831896   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key ...
	I0828 17:14:13.831911   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key: {Name:mkf62adf398d03ad935437fbd19c6e593dd9b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.831994   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd
	I0828 17:14:13.832008   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.254]
	I0828 17:14:14.103313   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd ...
	I0828 17:14:14.103342   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd: {Name:mkb51258da04d783bb7cf6695912752804f8bdd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.103493   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd ...
	I0828 17:14:14.103505   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd: {Name:mkd920f9d9856108b94330ec655e07e394a548c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.103572   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:14:14.103669   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:14:14.103723   29200 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:14:14.103763   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt with IP's: []
	I0828 17:14:14.189744   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt ...
	I0828 17:14:14.189777   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt: {Name:mkf86a5e9ba97890f5f5fab87c5e67448d427d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.189928   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key ...
	I0828 17:14:14.189939   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key: {Name:mkffc092eac46d4d3d8650d02f5802b03fae0e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.190003   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:14:14.190020   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:14:14.190030   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:14:14.190043   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:14:14.190054   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:14:14.190069   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:14:14.190107   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:14:14.190124   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:14:14.190179   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:14:14.190217   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:14:14.190227   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:14:14.190250   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:14:14.190273   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:14:14.190294   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:14:14.190337   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:14.190363   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.190378   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.190404   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.190946   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:14:14.214763   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:14:14.236519   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:14:14.258332   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:14:14.283248   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 17:14:14.307773   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:14:14.332795   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:14:14.355311   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:14:14.377602   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:14:14.399309   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:14:14.421842   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:14:14.445679   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:14:14.478814   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:14:14.485538   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:14:14.502012   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.506266   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.506327   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.511879   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:14:14.521746   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:14:14.532498   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.536782   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.536832   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.542304   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:14:14.552354   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:14:14.562443   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.566315   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.566367   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.571462   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
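	The openssl x509 -hash invocations above compute the subject-name hash that OpenSSL uses to look up CA certificates by file name; each certificate is then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal sketch of reproducing one of those names, assuming the same minikubeCA.pem is present on the node:
	  # prints the subject hash (e.g. b5213941) used for the /etc/ssl/certs symlink
	  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem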
	I0828 17:14:14.581095   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:14:14.584646   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:14:14.584705   29200 kubeadm.go:392] StartCluster: {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:14:14.584806   29200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:14:14.584864   29200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:14:14.622690   29200 cri.go:89] found id: ""
	I0828 17:14:14.622760   29200 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 17:14:14.632258   29200 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 17:14:14.641430   29200 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 17:14:14.650444   29200 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 17:14:14.650458   29200 kubeadm.go:157] found existing configuration files:
	
	I0828 17:14:14.650509   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 17:14:14.658834   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 17:14:14.658896   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 17:14:14.667393   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 17:14:14.675746   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 17:14:14.675791   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 17:14:14.684492   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 17:14:14.692555   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 17:14:14.692595   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 17:14:14.700992   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 17:14:14.709128   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 17:14:14.709169   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 17:14:14.717497   29200 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 17:14:14.808101   29200 kubeadm.go:310] W0828 17:14:14.788966     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:14:14.808801   29200 kubeadm.go:310] W0828 17:14:14.789782     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:14:14.908998   29200 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 17:14:25.023761   29200 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 17:14:25.023809   29200 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 17:14:25.023885   29200 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 17:14:25.023985   29200 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 17:14:25.024061   29200 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 17:14:25.024155   29200 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 17:14:25.025534   29200 out.go:235]   - Generating certificates and keys ...
	I0828 17:14:25.025609   29200 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 17:14:25.025669   29200 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 17:14:25.025738   29200 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 17:14:25.025816   29200 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 17:14:25.025904   29200 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 17:14:25.025979   29200 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 17:14:25.026045   29200 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 17:14:25.026225   29200 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-240486 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0828 17:14:25.026305   29200 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 17:14:25.026486   29200 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-240486 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0828 17:14:25.026580   29200 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 17:14:25.026673   29200 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 17:14:25.026739   29200 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 17:14:25.026814   29200 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 17:14:25.026888   29200 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 17:14:25.026969   29200 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 17:14:25.027053   29200 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 17:14:25.027142   29200 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 17:14:25.027221   29200 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 17:14:25.027322   29200 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 17:14:25.027386   29200 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 17:14:25.028852   29200 out.go:235]   - Booting up control plane ...
	I0828 17:14:25.028955   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 17:14:25.029069   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 17:14:25.029133   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 17:14:25.029257   29200 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 17:14:25.029365   29200 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 17:14:25.029420   29200 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 17:14:25.029536   29200 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 17:14:25.029672   29200 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 17:14:25.029743   29200 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.178877ms
	I0828 17:14:25.029843   29200 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 17:14:25.029933   29200 kubeadm.go:310] [api-check] The API server is healthy after 6.085182884s
	I0828 17:14:25.030051   29200 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 17:14:25.030210   29200 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 17:14:25.030297   29200 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 17:14:25.030528   29200 kubeadm.go:310] [mark-control-plane] Marking the node ha-240486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 17:14:25.030605   29200 kubeadm.go:310] [bootstrap-token] Using token: tx0kpz.xk8c8jbbyazjlymg
	I0828 17:14:25.031867   29200 out.go:235]   - Configuring RBAC rules ...
	I0828 17:14:25.031978   29200 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 17:14:25.032069   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 17:14:25.032254   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 17:14:25.032417   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 17:14:25.032543   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 17:14:25.032621   29200 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 17:14:25.032724   29200 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 17:14:25.032761   29200 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 17:14:25.032808   29200 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 17:14:25.032819   29200 kubeadm.go:310] 
	I0828 17:14:25.032870   29200 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 17:14:25.032880   29200 kubeadm.go:310] 
	I0828 17:14:25.032968   29200 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 17:14:25.032974   29200 kubeadm.go:310] 
	I0828 17:14:25.032995   29200 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 17:14:25.033047   29200 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 17:14:25.033092   29200 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 17:14:25.033098   29200 kubeadm.go:310] 
	I0828 17:14:25.033145   29200 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 17:14:25.033151   29200 kubeadm.go:310] 
	I0828 17:14:25.033190   29200 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 17:14:25.033201   29200 kubeadm.go:310] 
	I0828 17:14:25.033241   29200 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 17:14:25.033303   29200 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 17:14:25.033362   29200 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 17:14:25.033368   29200 kubeadm.go:310] 
	I0828 17:14:25.033439   29200 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 17:14:25.033506   29200 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 17:14:25.033512   29200 kubeadm.go:310] 
	I0828 17:14:25.033577   29200 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tx0kpz.xk8c8jbbyazjlymg \
	I0828 17:14:25.033693   29200 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 17:14:25.033725   29200 kubeadm.go:310] 	--control-plane 
	I0828 17:14:25.033732   29200 kubeadm.go:310] 
	I0828 17:14:25.033799   29200 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 17:14:25.033806   29200 kubeadm.go:310] 
	I0828 17:14:25.033885   29200 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tx0kpz.xk8c8jbbyazjlymg \
	I0828 17:14:25.033980   29200 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 17:14:25.033990   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:14:25.033995   29200 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 17:14:25.036206   29200 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0828 17:14:25.037364   29200 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0828 17:14:25.042564   29200 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0828 17:14:25.042579   29200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0828 17:14:25.062153   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0828 17:14:25.440327   29200 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 17:14:25.440444   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:25.440452   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486 minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=true
	I0828 17:14:25.460051   29200 ops.go:34] apiserver oom_adj: -16
	I0828 17:14:25.664735   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:26.165116   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:26.664899   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:27.165714   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:27.665331   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.164925   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.665631   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.775282   29200 kubeadm.go:1113] duration metric: took 3.334915766s to wait for elevateKubeSystemPrivileges
	I0828 17:14:28.775319   29200 kubeadm.go:394] duration metric: took 14.190618055s to StartCluster
	I0828 17:14:28.775342   29200 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:28.775423   29200 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:14:28.776337   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:28.776575   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 17:14:28.776597   29200 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:28.776633   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:14:28.776650   29200 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 17:14:28.776743   29200 addons.go:69] Setting storage-provisioner=true in profile "ha-240486"
	I0828 17:14:28.776779   29200 addons.go:234] Setting addon storage-provisioner=true in "ha-240486"
	I0828 17:14:28.776813   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:28.776822   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:28.776746   29200 addons.go:69] Setting default-storageclass=true in profile "ha-240486"
	I0828 17:14:28.776888   29200 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-240486"
	I0828 17:14:28.777244   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.777291   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.777318   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.777353   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.792012   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0828 17:14:28.792453   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I0828 17:14:28.792518   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.792801   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.793018   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.793044   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.793303   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.793326   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.793388   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.793583   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.793636   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.794132   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.794170   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.795658   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:14:28.795927   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 17:14:28.796459   29200 cert_rotation.go:140] Starting client certificate rotation controller
	I0828 17:14:28.796712   29200 addons.go:234] Setting addon default-storageclass=true in "ha-240486"
	I0828 17:14:28.796745   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:28.796992   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.797023   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.811222   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I0828 17:14:28.811260   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0828 17:14:28.811638   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.811655   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.812105   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.812120   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.812136   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.812162   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.812509   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.812516   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.812697   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.813066   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.813095   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.814562   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:28.816906   29200 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:14:28.818305   29200 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:14:28.818326   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 17:14:28.818343   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:28.821529   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.821960   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:28.821993   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.822158   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:28.822365   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:28.822513   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:28.822663   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:28.827776   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0828 17:14:28.828157   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.828558   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.828580   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.828869   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.829066   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.830642   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:28.830828   29200 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 17:14:28.830841   29200 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 17:14:28.830853   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:28.833514   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.833871   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:28.833900   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.833991   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:28.834154   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:28.834243   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:28.834365   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:28.985091   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 17:14:29.043973   29200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 17:14:29.044367   29200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:14:29.656257   29200 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0828 17:14:29.825339   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825362   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825475   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825497   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825690   29200 main.go:141] libmachine: (ha-240486) DBG | Closing plugin on server side
	I0828 17:14:29.825713   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.825726   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.825739   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825771   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.825789   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.825803   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825816   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825859   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.826052   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.826065   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.826277   29200 main.go:141] libmachine: (ha-240486) DBG | Closing plugin on server side
	I0828 17:14:29.826302   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.826332   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.826429   29200 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 17:14:29.826449   29200 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 17:14:29.826561   29200 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0828 17:14:29.826571   29200 round_trippers.go:469] Request Headers:
	I0828 17:14:29.826582   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:14:29.826595   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:14:29.837409   29200 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0828 17:14:29.839580   29200 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0828 17:14:29.839600   29200 round_trippers.go:469] Request Headers:
	I0828 17:14:29.839611   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:14:29.839617   29200 round_trippers.go:473]     Content-Type: application/json
	I0828 17:14:29.839621   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:14:29.842812   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:14:29.842985   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.843004   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.843253   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.843272   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.845064   29200 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0828 17:14:29.846438   29200 addons.go:510] duration metric: took 1.069792822s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0828 17:14:29.846474   29200 start.go:246] waiting for cluster config update ...
	I0828 17:14:29.846489   29200 start.go:255] writing updated cluster config ...
	I0828 17:14:29.848495   29200 out.go:201] 
	I0828 17:14:29.850555   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:29.850650   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:29.852115   29200 out.go:177] * Starting "ha-240486-m02" control-plane node in "ha-240486" cluster
	I0828 17:14:29.853234   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:14:29.853251   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:14:29.853338   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:14:29.853356   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:14:29.853422   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:29.853569   29200 start.go:360] acquireMachinesLock for ha-240486-m02: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:14:29.853609   29200 start.go:364] duration metric: took 22.687µs to acquireMachinesLock for "ha-240486-m02"
	I0828 17:14:29.853627   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:29.853695   29200 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0828 17:14:29.855387   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:14:29.855464   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:29.855496   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:29.870016   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I0828 17:14:29.870414   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:29.870871   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:29.870896   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:29.871164   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:29.871372   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:29.871496   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:29.871632   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:14:29.871662   29200 client.go:168] LocalClient.Create starting
	I0828 17:14:29.871698   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:14:29.871740   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:14:29.871761   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:14:29.871824   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:14:29.871866   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:14:29.871884   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:14:29.871909   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:14:29.871921   29200 main.go:141] libmachine: (ha-240486-m02) Calling .PreCreateCheck
	I0828 17:14:29.872081   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:29.872436   29200 main.go:141] libmachine: Creating machine...
	I0828 17:14:29.872450   29200 main.go:141] libmachine: (ha-240486-m02) Calling .Create
	I0828 17:14:29.872570   29200 main.go:141] libmachine: (ha-240486-m02) Creating KVM machine...
	I0828 17:14:29.873897   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found existing default KVM network
	I0828 17:14:29.873988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found existing private KVM network mk-ha-240486
	I0828 17:14:29.874197   29200 main.go:141] libmachine: (ha-240486-m02) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 ...
	I0828 17:14:29.874225   29200 main.go:141] libmachine: (ha-240486-m02) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:14:29.874237   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:29.874151   29549 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:14:29.874289   29200 main.go:141] libmachine: (ha-240486-m02) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:14:30.101165   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.101014   29549 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa...
	I0828 17:14:30.262160   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.261990   29549 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/ha-240486-m02.rawdisk...
	I0828 17:14:30.262195   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Writing magic tar header
	I0828 17:14:30.262219   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Writing SSH key tar header
	I0828 17:14:30.262233   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.262132   29549 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 ...
	I0828 17:14:30.262248   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02
	I0828 17:14:30.262263   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:14:30.262278   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:14:30.262292   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 (perms=drwx------)
	I0828 17:14:30.262309   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:14:30.262324   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:14:30.262335   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:14:30.262350   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:14:30.262361   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:14:30.262374   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:14:30.262387   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home
	I0828 17:14:30.262401   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:14:30.262421   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:14:30.262433   29200 main.go:141] libmachine: (ha-240486-m02) Creating domain...
	I0828 17:14:30.262470   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Skipping /home - not owner
	I0828 17:14:30.263327   29200 main.go:141] libmachine: (ha-240486-m02) define libvirt domain using xml: 
	I0828 17:14:30.263346   29200 main.go:141] libmachine: (ha-240486-m02) <domain type='kvm'>
	I0828 17:14:30.263362   29200 main.go:141] libmachine: (ha-240486-m02)   <name>ha-240486-m02</name>
	I0828 17:14:30.263371   29200 main.go:141] libmachine: (ha-240486-m02)   <memory unit='MiB'>2200</memory>
	I0828 17:14:30.263383   29200 main.go:141] libmachine: (ha-240486-m02)   <vcpu>2</vcpu>
	I0828 17:14:30.263392   29200 main.go:141] libmachine: (ha-240486-m02)   <features>
	I0828 17:14:30.263401   29200 main.go:141] libmachine: (ha-240486-m02)     <acpi/>
	I0828 17:14:30.263409   29200 main.go:141] libmachine: (ha-240486-m02)     <apic/>
	I0828 17:14:30.263435   29200 main.go:141] libmachine: (ha-240486-m02)     <pae/>
	I0828 17:14:30.263456   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263470   29200 main.go:141] libmachine: (ha-240486-m02)   </features>
	I0828 17:14:30.263482   29200 main.go:141] libmachine: (ha-240486-m02)   <cpu mode='host-passthrough'>
	I0828 17:14:30.263494   29200 main.go:141] libmachine: (ha-240486-m02)   
	I0828 17:14:30.263507   29200 main.go:141] libmachine: (ha-240486-m02)   </cpu>
	I0828 17:14:30.263517   29200 main.go:141] libmachine: (ha-240486-m02)   <os>
	I0828 17:14:30.263522   29200 main.go:141] libmachine: (ha-240486-m02)     <type>hvm</type>
	I0828 17:14:30.263528   29200 main.go:141] libmachine: (ha-240486-m02)     <boot dev='cdrom'/>
	I0828 17:14:30.263534   29200 main.go:141] libmachine: (ha-240486-m02)     <boot dev='hd'/>
	I0828 17:14:30.263541   29200 main.go:141] libmachine: (ha-240486-m02)     <bootmenu enable='no'/>
	I0828 17:14:30.263547   29200 main.go:141] libmachine: (ha-240486-m02)   </os>
	I0828 17:14:30.263552   29200 main.go:141] libmachine: (ha-240486-m02)   <devices>
	I0828 17:14:30.263560   29200 main.go:141] libmachine: (ha-240486-m02)     <disk type='file' device='cdrom'>
	I0828 17:14:30.263577   29200 main.go:141] libmachine: (ha-240486-m02)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/boot2docker.iso'/>
	I0828 17:14:30.263592   29200 main.go:141] libmachine: (ha-240486-m02)       <target dev='hdc' bus='scsi'/>
	I0828 17:14:30.263603   29200 main.go:141] libmachine: (ha-240486-m02)       <readonly/>
	I0828 17:14:30.263615   29200 main.go:141] libmachine: (ha-240486-m02)     </disk>
	I0828 17:14:30.263626   29200 main.go:141] libmachine: (ha-240486-m02)     <disk type='file' device='disk'>
	I0828 17:14:30.263634   29200 main.go:141] libmachine: (ha-240486-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:14:30.263642   29200 main.go:141] libmachine: (ha-240486-m02)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/ha-240486-m02.rawdisk'/>
	I0828 17:14:30.263653   29200 main.go:141] libmachine: (ha-240486-m02)       <target dev='hda' bus='virtio'/>
	I0828 17:14:30.263678   29200 main.go:141] libmachine: (ha-240486-m02)     </disk>
	I0828 17:14:30.263700   29200 main.go:141] libmachine: (ha-240486-m02)     <interface type='network'>
	I0828 17:14:30.263711   29200 main.go:141] libmachine: (ha-240486-m02)       <source network='mk-ha-240486'/>
	I0828 17:14:30.263727   29200 main.go:141] libmachine: (ha-240486-m02)       <model type='virtio'/>
	I0828 17:14:30.263740   29200 main.go:141] libmachine: (ha-240486-m02)     </interface>
	I0828 17:14:30.263751   29200 main.go:141] libmachine: (ha-240486-m02)     <interface type='network'>
	I0828 17:14:30.263761   29200 main.go:141] libmachine: (ha-240486-m02)       <source network='default'/>
	I0828 17:14:30.263771   29200 main.go:141] libmachine: (ha-240486-m02)       <model type='virtio'/>
	I0828 17:14:30.263783   29200 main.go:141] libmachine: (ha-240486-m02)     </interface>
	I0828 17:14:30.263791   29200 main.go:141] libmachine: (ha-240486-m02)     <serial type='pty'>
	I0828 17:14:30.263803   29200 main.go:141] libmachine: (ha-240486-m02)       <target port='0'/>
	I0828 17:14:30.263816   29200 main.go:141] libmachine: (ha-240486-m02)     </serial>
	I0828 17:14:30.263822   29200 main.go:141] libmachine: (ha-240486-m02)     <console type='pty'>
	I0828 17:14:30.263829   29200 main.go:141] libmachine: (ha-240486-m02)       <target type='serial' port='0'/>
	I0828 17:14:30.263835   29200 main.go:141] libmachine: (ha-240486-m02)     </console>
	I0828 17:14:30.263842   29200 main.go:141] libmachine: (ha-240486-m02)     <rng model='virtio'>
	I0828 17:14:30.263848   29200 main.go:141] libmachine: (ha-240486-m02)       <backend model='random'>/dev/random</backend>
	I0828 17:14:30.263854   29200 main.go:141] libmachine: (ha-240486-m02)     </rng>
	I0828 17:14:30.263860   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263866   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263872   29200 main.go:141] libmachine: (ha-240486-m02)   </devices>
	I0828 17:14:30.263887   29200 main.go:141] libmachine: (ha-240486-m02) </domain>
	I0828 17:14:30.263897   29200 main.go:141] libmachine: (ha-240486-m02) 
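The XML block above is the complete libvirt domain definition the kvm2 driver hands to libvirtd before the "Creating domain..." step. A minimal sketch of that define-and-boot call, assuming the libvirt.org/go/libvirt bindings; the helper name and error handling are illustrative, not the driver's actual code:

    package main

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndBoot persists a domain from XML like the block printed above and
    // starts it, corresponding to "define libvirt domain using xml" and
    // "Creating domain..." in the log.
    func defineAndBoot(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the machine config
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // boots the VM; the driver then waits for a DHCP lease
    }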
	I0828 17:14:30.270633   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:e3:56:d7 in network default
	I0828 17:14:30.271175   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring networks are active...
	I0828 17:14:30.271197   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:30.271932   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring network default is active
	I0828 17:14:30.272289   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring network mk-ha-240486 is active
	I0828 17:14:30.272742   29200 main.go:141] libmachine: (ha-240486-m02) Getting domain xml...
	I0828 17:14:30.273403   29200 main.go:141] libmachine: (ha-240486-m02) Creating domain...
	I0828 17:14:31.496045   29200 main.go:141] libmachine: (ha-240486-m02) Waiting to get IP...
	I0828 17:14:31.496823   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:31.497228   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:31.497280   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:31.497238   29549 retry.go:31] will retry after 309.330553ms: waiting for machine to come up
	I0828 17:14:31.808741   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:31.809684   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:31.809716   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:31.809619   29549 retry.go:31] will retry after 389.919333ms: waiting for machine to come up
	I0828 17:14:32.201158   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:32.201509   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:32.201534   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:32.201463   29549 retry.go:31] will retry after 376.365916ms: waiting for machine to come up
	I0828 17:14:32.579039   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:32.579501   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:32.579529   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:32.579449   29549 retry.go:31] will retry after 501.696482ms: waiting for machine to come up
	I0828 17:14:33.083410   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:33.083919   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:33.083948   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:33.083848   29549 retry.go:31] will retry after 704.393424ms: waiting for machine to come up
	I0828 17:14:33.789221   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:33.789613   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:33.789640   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:33.789571   29549 retry.go:31] will retry after 921.016003ms: waiting for machine to come up
	I0828 17:14:34.712190   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:34.712613   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:34.712646   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:34.712569   29549 retry.go:31] will retry after 810.327503ms: waiting for machine to come up
	I0828 17:14:35.524860   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:35.525335   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:35.525372   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:35.525317   29549 retry.go:31] will retry after 1.133731078s: waiting for machine to come up
	I0828 17:14:36.660577   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:36.660936   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:36.660956   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:36.660914   29549 retry.go:31] will retry after 1.611562831s: waiting for machine to come up
	I0828 17:14:38.273523   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:38.273917   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:38.273946   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:38.273869   29549 retry.go:31] will retry after 1.957592324s: waiting for machine to come up
	I0828 17:14:40.233439   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:40.233821   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:40.233850   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:40.233764   29549 retry.go:31] will retry after 2.876473022s: waiting for machine to come up
	I0828 17:14:43.113682   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:43.114056   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:43.114095   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:43.114018   29549 retry.go:31] will retry after 3.170561273s: waiting for machine to come up
	I0828 17:14:46.286603   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:46.286998   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:46.287026   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:46.286944   29549 retry.go:31] will retry after 2.886461612s: waiting for machine to come up
	I0828 17:14:49.176848   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.177265   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.177289   29200 main.go:141] libmachine: (ha-240486-m02) Found IP for machine: 192.168.39.103
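The retry.go lines above show the driver polling for the guest's DHCP lease with a growing, jittered delay (309ms, 389ms, ... 3.17s) until an IP appears. A minimal sketch of that wait loop; the function, bounds, and growth factor are assumptions for illustration, not the actual retry.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout expires,
    // sleeping a randomized, growing interval between attempts, similar to the
    // "will retry after ..." lines in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            backoff = backoff * 3 / 2 // grow the base interval each round
        }
        return "", errors.New("timed out waiting for machine IP")
    }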
	I0828 17:14:49.177302   29200 main.go:141] libmachine: (ha-240486-m02) Reserving static IP address...
	I0828 17:14:49.177626   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find host DHCP lease matching {name: "ha-240486-m02", mac: "52:54:00:b3:68:04", ip: "192.168.39.103"} in network mk-ha-240486
	I0828 17:14:49.249548   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Getting to WaitForSSH function...
	I0828 17:14:49.249574   29200 main.go:141] libmachine: (ha-240486-m02) Reserved static IP address: 192.168.39.103
	I0828 17:14:49.249624   29200 main.go:141] libmachine: (ha-240486-m02) Waiting for SSH to be available...
	I0828 17:14:49.252243   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.252577   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.252599   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.252787   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using SSH client type: external
	I0828 17:14:49.252813   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa (-rw-------)
	I0828 17:14:49.252842   29200 main.go:141] libmachine: (ha-240486-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:14:49.252856   29200 main.go:141] libmachine: (ha-240486-m02) DBG | About to run SSH command:
	I0828 17:14:49.252869   29200 main.go:141] libmachine: (ha-240486-m02) DBG | exit 0
	I0828 17:14:49.374056   29200 main.go:141] libmachine: (ha-240486-m02) DBG | SSH cmd err, output: <nil>: 
	I0828 17:14:49.374303   29200 main.go:141] libmachine: (ha-240486-m02) KVM machine creation complete!
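Creation is only declared complete once the "exit 0" probe above succeeds over SSH. A rough sketch of that probe, shelling out to the system ssh client with a subset of the options shown in the log (paths and options here are illustrative):

    package main

    import "os/exec"

    // probeSSH runs "exit 0" on the guest, mirroring the WaitForSSH step above.
    // It returns nil once sshd in the VM accepts the key-based connection.
    func probeSSH(ip, keyPath string) error {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run()
    }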
	I0828 17:14:49.374645   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:49.375205   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:49.375408   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:49.375553   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:14:49.375569   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:14:49.376919   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:14:49.376932   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:14:49.376938   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:14:49.376944   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.379123   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.379507   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.379528   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.379716   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.379902   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.380068   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.380220   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.380366   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.380557   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.380578   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:14:49.477473   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:49.477494   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:14:49.477502   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.480089   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.480492   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.480526   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.480654   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.480810   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.480981   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.481112   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.481252   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.481456   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.481468   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:14:49.578647   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:14:49.578743   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:14:49.578758   29200 main.go:141] libmachine: Provisioning with buildroot...
	I0828 17:14:49.578765   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.579044   29200 buildroot.go:166] provisioning hostname "ha-240486-m02"
	I0828 17:14:49.579075   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.579259   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.582053   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.582427   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.582457   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.582642   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.582814   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.583003   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.583159   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.583329   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.583547   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.583565   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486-m02 && echo "ha-240486-m02" | sudo tee /etc/hostname
	I0828 17:14:49.697767   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486-m02
	
	I0828 17:14:49.697794   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.700421   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.700827   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.700852   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.701086   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.701272   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.701445   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.701571   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.701724   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.701919   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.701937   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:14:49.806362   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:49.806390   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:14:49.806427   29200 buildroot.go:174] setting up certificates
	I0828 17:14:49.806443   29200 provision.go:84] configureAuth start
	I0828 17:14:49.806463   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.806764   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:49.809479   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.809830   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.809855   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.809989   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.812004   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.812271   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.812299   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.812468   29200 provision.go:143] copyHostCerts
	I0828 17:14:49.812499   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:49.812535   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:14:49.812547   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:49.812625   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:14:49.812715   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:49.812740   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:14:49.812750   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:49.812785   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:14:49.812846   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:49.812870   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:14:49.812879   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:49.812913   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:14:49.812982   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486-m02 san=[127.0.0.1 192.168.39.103 ha-240486-m02 localhost minikube]
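The provision step above issues a per-node server certificate signed by the minikube CA, with the IP and DNS SANs listed in that line. A minimal sketch of what that amounts to with Go's crypto/x509; the helper name, key size, and serial number are assumptions, not minikube's actual provisioning code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by the given CA, carrying
    // the SANs from the log line above (127.0.0.1, 192.168.39.103, ha-240486-m02,
    // localhost, minikube). It returns the DER bytes and the node's private key;
    // both would be PEM-encoded before being written as server.pem / server-key.pem.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-240486-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")},
            DNSNames:     []string{"ha-240486-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }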
	I0828 17:14:49.888543   29200 provision.go:177] copyRemoteCerts
	I0828 17:14:49.888600   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:14:49.888627   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.891270   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.891563   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.891589   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.891757   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.891982   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.892131   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.892264   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:49.971726   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:14:49.971806   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 17:14:49.994849   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:14:49.994921   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:14:50.017522   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:14:50.017586   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:14:50.040308   29200 provision.go:87] duration metric: took 233.852237ms to configureAuth
	I0828 17:14:50.040355   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:14:50.040511   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:50.040580   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.043078   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.043411   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.043442   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.043617   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.043806   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.043961   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.044124   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.044252   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:50.044397   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:50.044411   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:14:50.265971   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:14:50.266003   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:14:50.266013   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetURL
	I0828 17:14:50.267289   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using libvirt version 6000000
	I0828 17:14:50.269548   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.269866   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.269891   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.270040   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:14:50.270054   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:14:50.270061   29200 client.go:171] duration metric: took 20.398388754s to LocalClient.Create
	I0828 17:14:50.270102   29200 start.go:167] duration metric: took 20.398462834s to libmachine.API.Create "ha-240486"
	I0828 17:14:50.270115   29200 start.go:293] postStartSetup for "ha-240486-m02" (driver="kvm2")
	I0828 17:14:50.270128   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:14:50.270151   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.270420   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:14:50.270440   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.272619   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.272961   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.272985   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.273124   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.273308   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.273457   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.273591   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.353365   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:14:50.358483   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:14:50.358512   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:14:50.358581   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:14:50.358650   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:14:50.358663   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:14:50.358745   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:14:50.368139   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:50.392845   29200 start.go:296] duration metric: took 122.714343ms for postStartSetup
	I0828 17:14:50.392906   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:50.393528   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:50.396383   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.396750   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.396763   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.397003   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:50.397241   29200 start.go:128] duration metric: took 20.543534853s to createHost
	I0828 17:14:50.397265   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.399877   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.400199   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.400219   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.400426   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.400627   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.400783   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.400895   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.401030   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:50.401234   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:50.401246   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:14:50.498646   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865290.473915256
	
	I0828 17:14:50.498666   29200 fix.go:216] guest clock: 1724865290.473915256
	I0828 17:14:50.498674   29200 fix.go:229] Guest: 2024-08-28 17:14:50.473915256 +0000 UTC Remote: 2024-08-28 17:14:50.397255079 +0000 UTC m=+62.169751704 (delta=76.660177ms)
	I0828 17:14:50.498689   29200 fix.go:200] guest clock delta is within tolerance: 76.660177ms
	I0828 17:14:50.498694   29200 start.go:83] releasing machines lock for "ha-240486-m02", held for 20.645075428s
	I0828 17:14:50.498710   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.499024   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:50.501564   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.501988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.502012   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.504433   29200 out.go:177] * Found network options:
	I0828 17:14:50.505883   29200 out.go:177]   - NO_PROXY=192.168.39.227
	W0828 17:14:50.507380   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:14:50.507416   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508049   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508257   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508363   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:14:50.508401   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	W0828 17:14:50.508522   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:14:50.508613   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:14:50.508649   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.511197   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511474   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511545   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.511574   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511716   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.511881   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.511961   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.511992   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.512047   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.512148   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.512222   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.512325   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.512476   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.512636   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.743801   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:14:50.749218   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:14:50.749299   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:14:50.765791   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:14:50.765815   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:14:50.765888   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:14:50.782925   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:14:50.797403   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:14:50.797462   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:14:50.812777   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:14:50.827620   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:14:50.952895   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:14:51.085964   29200 docker.go:233] disabling docker service ...
	I0828 17:14:51.086038   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:14:51.100646   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:14:51.114372   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:14:51.258433   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:14:51.378426   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:14:51.392132   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:14:51.412693   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:14:51.412752   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.423135   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:14:51.423185   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.433375   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.442857   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.452289   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:14:51.462037   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.471401   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.487553   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.497005   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:14:51.505597   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:14:51.505659   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:14:51.516933   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:14:51.526099   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:51.632890   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:14:51.727935   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:14:51.728018   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:14:51.732611   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:14:51.732669   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:14:51.736097   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:14:51.779358   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:14:51.779446   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:51.809785   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:51.840021   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:14:51.841344   29200 out.go:177]   - env NO_PROXY=192.168.39.227
	I0828 17:14:51.842489   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:51.844988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:51.845341   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:51.845374   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:51.845616   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:14:51.849640   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:51.861969   29200 mustload.go:65] Loading cluster: ha-240486
	I0828 17:14:51.862200   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:51.862455   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:51.862497   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:51.877690   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0828 17:14:51.878221   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:51.878718   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:51.878738   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:51.879035   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:51.879176   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:51.880797   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:51.881079   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:51.881111   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:51.896279   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0828 17:14:51.896673   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:51.897118   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:51.897139   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:51.897401   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:51.897562   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:51.897738   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.103
	I0828 17:14:51.897748   29200 certs.go:194] generating shared ca certs ...
	I0828 17:14:51.897761   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:51.897883   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:14:51.897924   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:14:51.897933   29200 certs.go:256] generating profile certs ...
	I0828 17:14:51.897995   29200 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:14:51.898021   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22
	I0828 17:14:51.898033   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.254]
	I0828 17:14:52.005029   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 ...
	I0828 17:14:52.005054   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22: {Name:mk01885375cad3d22fa2b18a0913731209d0f7f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:52.005236   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22 ...
	I0828 17:14:52.005253   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22: {Name:mk1cf0bdd411116af52d270493dcf45381853faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:52.005348   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:14:52.005474   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:14:52.005592   29200 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:14:52.005606   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:14:52.005625   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:14:52.005637   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:14:52.005654   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:14:52.005666   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:14:52.005679   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:14:52.005689   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:14:52.005700   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:14:52.005742   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:14:52.005773   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:14:52.005783   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:14:52.005802   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:14:52.005822   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:14:52.005843   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:14:52.005878   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:52.005907   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.005920   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.005932   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.005962   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:52.008703   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:52.009075   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:52.009100   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:52.009248   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:52.009520   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:52.009666   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:52.009780   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:52.082432   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0828 17:14:52.087332   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0828 17:14:52.099798   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0828 17:14:52.104152   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0828 17:14:52.114144   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0828 17:14:52.117845   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0828 17:14:52.128105   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0828 17:14:52.132117   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0828 17:14:52.142213   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0828 17:14:52.145897   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0828 17:14:52.155902   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0828 17:14:52.159787   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0828 17:14:52.169474   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:14:52.192908   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:14:52.215638   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:14:52.238192   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:14:52.259513   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0828 17:14:52.280759   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:14:52.301862   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:14:52.323166   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:14:52.345050   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:14:52.366042   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:14:52.387082   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:14:52.408024   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0828 17:14:52.422663   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0828 17:14:52.438110   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0828 17:14:52.453761   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0828 17:14:52.468421   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0828 17:14:52.483087   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0828 17:14:52.497802   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0828 17:14:52.512440   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:14:52.517816   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:14:52.527439   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.531473   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.531520   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.536779   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:14:52.546308   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:14:52.556450   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.560503   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.560553   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.565872   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:14:52.575912   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:14:52.585733   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.589882   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.589937   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.595134   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 17:14:52.605198   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:14:52.608898   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:14:52.608951   29200 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.0 crio true true} ...
	I0828 17:14:52.609036   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:14:52.609068   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:14:52.609101   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:14:52.625208   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:14:52.625278   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0828 17:14:52.625334   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:52.634543   29200 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0828 17:14:52.634606   29200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:52.643784   29200 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0828 17:14:52.643870   29200 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0828 17:14:52.643784   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0828 17:14:52.643920   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:14:52.644010   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:14:52.648261   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0828 17:14:52.648287   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0828 17:14:53.549787   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:14:53.549866   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:14:53.554526   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0828 17:14:53.554565   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0828 17:14:53.765178   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:14:53.800403   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:14:53.800500   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:14:53.805267   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0828 17:14:53.805300   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0828 17:14:54.117733   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0828 17:14:54.126890   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 17:14:54.144348   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:14:54.161479   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:14:54.178463   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:14:54.182442   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:54.193912   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:54.317990   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:14:54.335631   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:54.336129   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:54.336196   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:54.351508   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0828 17:14:54.351940   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:54.352400   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:54.352425   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:54.352721   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:54.352908   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:54.353031   29200 start.go:317] joinCluster: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:14:54.353140   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0828 17:14:54.353158   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:54.356321   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:54.356770   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:54.356809   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:54.357067   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:54.357278   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:54.357451   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:54.357615   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:54.501288   29200 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:54.501351   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t7dffj.lbnbcon9dz7sdvz7 --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I0828 17:15:16.425395   29200 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t7dffj.lbnbcon9dz7sdvz7 --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.924015345s)
	I0828 17:15:16.425446   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0828 17:15:16.983058   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486-m02 minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=false
	I0828 17:15:17.092321   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-240486-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0828 17:15:17.195990   29200 start.go:319] duration metric: took 22.842954145s to joinCluster
	I0828 17:15:17.196065   29200 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:15:17.196355   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:15:17.198043   29200 out.go:177] * Verifying Kubernetes components...
	I0828 17:15:17.199594   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:15:17.486580   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:15:17.512850   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:15:17.513151   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0828 17:15:17.513218   29200 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0828 17:15:17.513492   29200 node_ready.go:35] waiting up to 6m0s for node "ha-240486-m02" to be "Ready" ...
	I0828 17:15:17.513599   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:17.513612   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:17.513625   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:17.513630   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:17.521769   29200 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0828 17:15:18.013712   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:18.013746   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:18.013757   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:18.013763   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:18.019484   29200 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0828 17:15:18.514511   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:18.514532   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:18.514541   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:18.514545   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:18.518170   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:19.014635   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:19.014659   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:19.014670   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:19.014677   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:19.018729   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:19.514200   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:19.514223   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:19.514232   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:19.514236   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:19.517526   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:19.518126   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:20.013686   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:20.013711   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:20.013722   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:20.013728   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:20.016710   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:20.513675   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:20.513696   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:20.513708   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:20.513712   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:20.517212   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:21.014172   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:21.014195   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:21.014206   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:21.014210   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:21.018697   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:21.514475   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:21.514512   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:21.514522   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:21.514526   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:21.517844   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:21.518569   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:22.013738   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:22.013758   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:22.013767   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:22.013773   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:22.017406   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:22.514535   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:22.514557   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:22.514569   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:22.514577   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:22.518450   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:23.014476   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:23.014496   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:23.014504   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:23.014508   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:23.017523   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:23.514148   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:23.514167   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:23.514176   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:23.514180   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:23.517401   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:24.014492   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:24.014523   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:24.014535   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:24.014542   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:24.018750   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:24.019202   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:24.513722   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:24.513743   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:24.513751   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:24.513755   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:24.517023   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:25.014439   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:25.014465   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:25.014477   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:25.014482   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:25.018254   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:25.514350   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:25.514387   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:25.514399   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:25.514404   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:25.517422   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:26.014394   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:26.014415   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:26.014424   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:26.014429   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:26.017966   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:26.513970   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:26.513991   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:26.514000   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:26.514004   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:26.517078   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:26.517601   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:27.013898   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:27.013924   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:27.013934   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:27.013940   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:27.017053   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:27.514328   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:27.514356   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:27.514366   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:27.514370   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:27.517810   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:28.013713   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:28.013744   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:28.013751   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:28.013754   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:28.017200   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:28.513829   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:28.513855   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:28.513864   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:28.513870   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:28.516923   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:29.014101   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:29.014122   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:29.014135   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:29.014142   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:29.018063   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:29.018503   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:29.514406   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:29.514427   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:29.514435   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:29.514439   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:29.517872   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:30.014067   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:30.014117   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:30.014128   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:30.014134   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:30.017275   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:30.514359   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:30.514380   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:30.514388   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:30.514392   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:30.517774   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:31.014691   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:31.014717   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:31.014727   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:31.014732   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:31.018008   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:31.018707   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:31.514127   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:31.514153   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:31.514161   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:31.514165   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:31.517215   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:32.014130   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:32.014151   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:32.014160   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:32.014163   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:32.017006   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:32.513774   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:32.513798   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:32.513808   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:32.513812   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:32.516896   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.013811   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:33.013831   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:33.013841   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:33.013847   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:33.017335   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.513769   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:33.513793   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:33.513802   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:33.513808   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:33.517313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.518322   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:34.014654   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:34.014681   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:34.014692   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:34.014697   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:34.017648   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:34.513955   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:34.513978   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:34.513986   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:34.513990   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:34.517349   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.013994   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.014015   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.014023   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.014029   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.017889   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.513780   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.513809   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.513820   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.513826   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.517176   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.517743   29200 node_ready.go:49] node "ha-240486-m02" has status "Ready":"True"
	I0828 17:15:35.517764   29200 node_ready.go:38] duration metric: took 18.004247806s for node "ha-240486-m02" to be "Ready" ...
	I0828 17:15:35.517776   29200 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:15:35.517861   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:35.517874   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.517884   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.517892   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.522041   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:35.528302   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.528407   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wtzml
	I0828 17:15:35.528419   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.528429   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.528438   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.531301   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.531817   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.531832   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.531842   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.531845   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.534272   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.534770   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.534787   29200 pod_ready.go:82] duration metric: took 6.459017ms for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.534798   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.534855   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x562s
	I0828 17:15:35.534865   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.534875   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.534881   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.537216   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.537796   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.537810   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.537819   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.537824   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.539925   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.540396   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.540411   29200 pod_ready.go:82] duration metric: took 5.606327ms for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.540423   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.540474   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486
	I0828 17:15:35.540484   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.540493   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.540499   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.542473   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:15:35.543096   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.543110   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.543120   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.543126   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.545555   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.546008   29200 pod_ready.go:93] pod "etcd-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.546027   29200 pod_ready.go:82] duration metric: took 5.597148ms for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.546040   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.546124   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m02
	I0828 17:15:35.546134   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.546146   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.546153   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.548354   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.548765   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.548777   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.548786   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.548793   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.550863   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.551191   29200 pod_ready.go:93] pod "etcd-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.551208   29200 pod_ready.go:82] duration metric: took 5.159072ms for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.551227   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.714632   29200 request.go:632] Waited for 163.332307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:15:35.714691   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:15:35.714696   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.714704   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.714709   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.717592   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.914761   29200 request.go:632] Waited for 196.371747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.914830   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.914836   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.914843   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.914848   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.918114   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.918688   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.918715   29200 pod_ready.go:82] duration metric: took 367.477955ms for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.918726   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.114627   29200 request.go:632] Waited for 195.832233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:15:36.114705   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:15:36.114714   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.114723   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.114731   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.118296   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.314232   29200 request.go:632] Waited for 195.315551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:36.314331   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:36.314343   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.314354   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.314362   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.317346   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:36.317892   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:36.317911   29200 pod_ready.go:82] duration metric: took 399.178304ms for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.317920   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.513909   29200 request.go:632] Waited for 195.926987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:15:36.513997   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:15:36.514005   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.514014   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.514019   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.517299   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.714390   29200 request.go:632] Waited for 196.373231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:36.714449   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:36.714454   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.714461   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.714467   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.717562   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.718206   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:36.718226   29200 pod_ready.go:82] duration metric: took 400.299823ms for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.718237   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.914216   29200 request.go:632] Waited for 195.906561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:15:36.914311   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:15:36.914318   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.914327   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.914332   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.917884   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.113954   29200 request.go:632] Waited for 195.316279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.114023   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.114029   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.114037   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.114046   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.117354   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.117848   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.117872   29200 pod_ready.go:82] duration metric: took 399.623871ms for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.117883   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.313868   29200 request.go:632] Waited for 195.919913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:15:37.313937   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:15:37.313944   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.313952   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.313956   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.317638   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.514816   29200 request.go:632] Waited for 196.395024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.514869   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.514874   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.514882   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.514886   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.517803   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:37.518391   29200 pod_ready.go:93] pod "kube-proxy-4w7tt" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.518411   29200 pod_ready.go:82] duration metric: took 400.517615ms for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.518423   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.714550   29200 request.go:632] Waited for 196.06408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:15:37.714626   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:15:37.714639   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.714649   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.714661   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.717959   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.913905   29200 request.go:632] Waited for 195.331101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:37.914060   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:37.914091   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.914104   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.914115   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.917242   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.917835   29200 pod_ready.go:93] pod "kube-proxy-jdnzs" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.917856   29200 pod_ready.go:82] duration metric: took 399.42415ms for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.917869   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.114814   29200 request.go:632] Waited for 196.863834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:15:38.114870   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:15:38.114875   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.114884   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.114887   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.118355   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.314373   29200 request.go:632] Waited for 195.36618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:38.314442   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:38.314449   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.314458   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.314465   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.317738   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.318357   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:38.318378   29200 pod_ready.go:82] duration metric: took 400.500122ms for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.318393   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.513886   29200 request.go:632] Waited for 195.419271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:15:38.513976   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:15:38.513987   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.513999   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.514007   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.517316   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.714274   29200 request.go:632] Waited for 196.387122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:38.714331   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:38.714336   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.714370   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.714380   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.717742   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.718408   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:38.718428   29200 pod_ready.go:82] duration metric: took 400.024956ms for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.718439   29200 pod_ready.go:39] duration metric: took 3.200648757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:15:38.718454   29200 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:15:38.718502   29200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:15:38.732926   29200 api_server.go:72] duration metric: took 21.536827363s to wait for apiserver process to appear ...
	I0828 17:15:38.732949   29200 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:15:38.732966   29200 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0828 17:15:38.737997   29200 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
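
The lines above show the healthz gate: once pgrep confirms the kube-apiserver process exists, minikube polls https://192.168.39.227:8443/healthz until it answers 200 with the body "ok". Below is a minimal, hypothetical Go sketch of that kind of probe, not minikube's actual code; the hard-coded URL, timeout, and the InsecureSkipVerify shortcut are assumptions made purely for illustration.

// healthz_sketch.go: hypothetical probe loop against an apiserver /healthz
// endpoint. Not minikube's implementation.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test VM serves a self-signed cert; a real client would
			// load the cluster CA instead of skipping verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // poll interval, arbitrary
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	_ = waitForHealthz("https://192.168.39.227:8443/healthz", time.Minute)
}
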
	I0828 17:15:38.738055   29200 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0828 17:15:38.738060   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.738068   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.738071   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.739313   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:15:38.739410   29200 api_server.go:141] control plane version: v1.31.0
	I0828 17:15:38.739426   29200 api_server.go:131] duration metric: took 6.471345ms to wait for apiserver health ...
	I0828 17:15:38.739434   29200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:15:38.914808   29200 request.go:632] Waited for 175.291341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:38.914885   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:38.914893   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.914904   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.914916   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.919370   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:38.923886   29200 system_pods.go:59] 17 kube-system pods found
	I0828 17:15:38.923914   29200 system_pods.go:61] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:15:38.923922   29200 system_pods.go:61] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:15:38.923926   29200 system_pods.go:61] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:15:38.923929   29200 system_pods.go:61] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:15:38.923932   29200 system_pods.go:61] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:15:38.923936   29200 system_pods.go:61] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:15:38.923940   29200 system_pods.go:61] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:15:38.923943   29200 system_pods.go:61] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:15:38.923951   29200 system_pods.go:61] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:15:38.923955   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:15:38.923958   29200 system_pods.go:61] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:15:38.923962   29200 system_pods.go:61] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:15:38.923966   29200 system_pods.go:61] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:15:38.923970   29200 system_pods.go:61] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:15:38.923975   29200 system_pods.go:61] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:15:38.923982   29200 system_pods.go:61] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:15:38.923987   29200 system_pods.go:61] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:15:38.923997   29200 system_pods.go:74] duration metric: took 184.5575ms to wait for pod list to return data ...
	I0828 17:15:38.924007   29200 default_sa.go:34] waiting for default service account to be created ...
	I0828 17:15:39.114465   29200 request.go:632] Waited for 190.380314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:15:39.114518   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:15:39.114523   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.114530   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.114533   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.118624   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:39.118837   29200 default_sa.go:45] found service account: "default"
	I0828 17:15:39.118852   29200 default_sa.go:55] duration metric: took 194.838823ms for default service account to be created ...
	I0828 17:15:39.118860   29200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 17:15:39.314371   29200 request.go:632] Waited for 195.426211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:39.314426   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:39.314431   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.314439   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.314443   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.319280   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:39.323575   29200 system_pods.go:86] 17 kube-system pods found
	I0828 17:15:39.323613   29200 system_pods.go:89] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:15:39.323621   29200 system_pods.go:89] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:15:39.323629   29200 system_pods.go:89] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:15:39.323636   29200 system_pods.go:89] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:15:39.323642   29200 system_pods.go:89] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:15:39.323649   29200 system_pods.go:89] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:15:39.323656   29200 system_pods.go:89] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:15:39.323664   29200 system_pods.go:89] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:15:39.323676   29200 system_pods.go:89] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:15:39.323681   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:15:39.323689   29200 system_pods.go:89] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:15:39.323694   29200 system_pods.go:89] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:15:39.323700   29200 system_pods.go:89] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:15:39.323704   29200 system_pods.go:89] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:15:39.323712   29200 system_pods.go:89] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:15:39.323715   29200 system_pods.go:89] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:15:39.323722   29200 system_pods.go:89] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:15:39.323732   29200 system_pods.go:126] duration metric: took 204.865856ms to wait for k8s-apps to be running ...
	I0828 17:15:39.323744   29200 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 17:15:39.323790   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:15:39.338729   29200 system_svc.go:56] duration metric: took 14.979047ms WaitForService to wait for kubelet
	I0828 17:15:39.338759   29200 kubeadm.go:582] duration metric: took 22.14266206s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:15:39.338784   29200 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:15:39.514507   29200 request.go:632] Waited for 175.626696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0828 17:15:39.514569   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0828 17:15:39.514578   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.514590   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.514600   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.518204   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:39.519150   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:15:39.519181   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:15:39.519196   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:15:39.519202   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:15:39.519211   29200 node_conditions.go:105] duration metric: took 180.421268ms to run NodePressure ...
	I0828 17:15:39.519228   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:15:39.519259   29200 start.go:255] writing updated cluster config ...
	I0828 17:15:39.521387   29200 out.go:201] 
	I0828 17:15:39.522752   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:15:39.522874   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:15:39.524455   29200 out.go:177] * Starting "ha-240486-m03" control-plane node in "ha-240486" cluster
	I0828 17:15:39.525471   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:15:39.525487   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:15:39.525565   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:15:39.525575   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:15:39.525652   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:15:39.525805   29200 start.go:360] acquireMachinesLock for ha-240486-m03: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:15:39.525843   29200 start.go:364] duration metric: took 20.835µs to acquireMachinesLock for "ha-240486-m03"
	I0828 17:15:39.525860   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:15:39.525943   29200 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0828 17:15:39.527450   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:15:39.527538   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:15:39.527571   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:15:39.542314   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0828 17:15:39.542721   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:15:39.543151   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:15:39.543171   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:15:39.543458   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:15:39.543607   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:15:39.543779   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:15:39.543915   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:15:39.543937   29200 client.go:168] LocalClient.Create starting
	I0828 17:15:39.543965   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:15:39.543996   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:15:39.544010   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:15:39.544056   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:15:39.544074   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:15:39.544092   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:15:39.544107   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:15:39.544115   29200 main.go:141] libmachine: (ha-240486-m03) Calling .PreCreateCheck
	I0828 17:15:39.544273   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:15:39.544646   29200 main.go:141] libmachine: Creating machine...
	I0828 17:15:39.544660   29200 main.go:141] libmachine: (ha-240486-m03) Calling .Create
	I0828 17:15:39.544798   29200 main.go:141] libmachine: (ha-240486-m03) Creating KVM machine...
	I0828 17:15:39.545885   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found existing default KVM network
	I0828 17:15:39.546000   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found existing private KVM network mk-ha-240486
	I0828 17:15:39.546135   29200 main.go:141] libmachine: (ha-240486-m03) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 ...
	I0828 17:15:39.546179   29200 main.go:141] libmachine: (ha-240486-m03) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:15:39.546331   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.546127   29930 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:15:39.546383   29200 main.go:141] libmachine: (ha-240486-m03) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:15:39.769872   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.769729   29930 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa...
	I0828 17:15:39.921729   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.921586   29930 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/ha-240486-m03.rawdisk...
	I0828 17:15:39.921767   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Writing magic tar header
	I0828 17:15:39.921781   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Writing SSH key tar header
	I0828 17:15:39.921792   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.921737   29930 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 ...
	I0828 17:15:39.921931   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 (perms=drwx------)
	I0828 17:15:39.921960   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:15:39.921974   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03
	I0828 17:15:39.921992   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:15:39.922001   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:15:39.922011   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:15:39.922019   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:15:39.922025   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:15:39.922031   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home
	I0828 17:15:39.922061   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Skipping /home - not owner
	I0828 17:15:39.922110   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:15:39.922131   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:15:39.922146   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:15:39.922163   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:15:39.922174   29200 main.go:141] libmachine: (ha-240486-m03) Creating domain...
	I0828 17:15:39.923082   29200 main.go:141] libmachine: (ha-240486-m03) define libvirt domain using xml: 
	I0828 17:15:39.923104   29200 main.go:141] libmachine: (ha-240486-m03) <domain type='kvm'>
	I0828 17:15:39.923115   29200 main.go:141] libmachine: (ha-240486-m03)   <name>ha-240486-m03</name>
	I0828 17:15:39.923127   29200 main.go:141] libmachine: (ha-240486-m03)   <memory unit='MiB'>2200</memory>
	I0828 17:15:39.923139   29200 main.go:141] libmachine: (ha-240486-m03)   <vcpu>2</vcpu>
	I0828 17:15:39.923147   29200 main.go:141] libmachine: (ha-240486-m03)   <features>
	I0828 17:15:39.923178   29200 main.go:141] libmachine: (ha-240486-m03)     <acpi/>
	I0828 17:15:39.923203   29200 main.go:141] libmachine: (ha-240486-m03)     <apic/>
	I0828 17:15:39.923215   29200 main.go:141] libmachine: (ha-240486-m03)     <pae/>
	I0828 17:15:39.923226   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923235   29200 main.go:141] libmachine: (ha-240486-m03)   </features>
	I0828 17:15:39.923245   29200 main.go:141] libmachine: (ha-240486-m03)   <cpu mode='host-passthrough'>
	I0828 17:15:39.923254   29200 main.go:141] libmachine: (ha-240486-m03)   
	I0828 17:15:39.923263   29200 main.go:141] libmachine: (ha-240486-m03)   </cpu>
	I0828 17:15:39.923274   29200 main.go:141] libmachine: (ha-240486-m03)   <os>
	I0828 17:15:39.923284   29200 main.go:141] libmachine: (ha-240486-m03)     <type>hvm</type>
	I0828 17:15:39.923292   29200 main.go:141] libmachine: (ha-240486-m03)     <boot dev='cdrom'/>
	I0828 17:15:39.923302   29200 main.go:141] libmachine: (ha-240486-m03)     <boot dev='hd'/>
	I0828 17:15:39.923311   29200 main.go:141] libmachine: (ha-240486-m03)     <bootmenu enable='no'/>
	I0828 17:15:39.923319   29200 main.go:141] libmachine: (ha-240486-m03)   </os>
	I0828 17:15:39.923330   29200 main.go:141] libmachine: (ha-240486-m03)   <devices>
	I0828 17:15:39.923341   29200 main.go:141] libmachine: (ha-240486-m03)     <disk type='file' device='cdrom'>
	I0828 17:15:39.923357   29200 main.go:141] libmachine: (ha-240486-m03)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/boot2docker.iso'/>
	I0828 17:15:39.923371   29200 main.go:141] libmachine: (ha-240486-m03)       <target dev='hdc' bus='scsi'/>
	I0828 17:15:39.923403   29200 main.go:141] libmachine: (ha-240486-m03)       <readonly/>
	I0828 17:15:39.923425   29200 main.go:141] libmachine: (ha-240486-m03)     </disk>
	I0828 17:15:39.923441   29200 main.go:141] libmachine: (ha-240486-m03)     <disk type='file' device='disk'>
	I0828 17:15:39.923455   29200 main.go:141] libmachine: (ha-240486-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:15:39.923473   29200 main.go:141] libmachine: (ha-240486-m03)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/ha-240486-m03.rawdisk'/>
	I0828 17:15:39.923483   29200 main.go:141] libmachine: (ha-240486-m03)       <target dev='hda' bus='virtio'/>
	I0828 17:15:39.923494   29200 main.go:141] libmachine: (ha-240486-m03)     </disk>
	I0828 17:15:39.923506   29200 main.go:141] libmachine: (ha-240486-m03)     <interface type='network'>
	I0828 17:15:39.923534   29200 main.go:141] libmachine: (ha-240486-m03)       <source network='mk-ha-240486'/>
	I0828 17:15:39.923554   29200 main.go:141] libmachine: (ha-240486-m03)       <model type='virtio'/>
	I0828 17:15:39.923565   29200 main.go:141] libmachine: (ha-240486-m03)     </interface>
	I0828 17:15:39.923576   29200 main.go:141] libmachine: (ha-240486-m03)     <interface type='network'>
	I0828 17:15:39.923590   29200 main.go:141] libmachine: (ha-240486-m03)       <source network='default'/>
	I0828 17:15:39.923601   29200 main.go:141] libmachine: (ha-240486-m03)       <model type='virtio'/>
	I0828 17:15:39.923611   29200 main.go:141] libmachine: (ha-240486-m03)     </interface>
	I0828 17:15:39.923621   29200 main.go:141] libmachine: (ha-240486-m03)     <serial type='pty'>
	I0828 17:15:39.923631   29200 main.go:141] libmachine: (ha-240486-m03)       <target port='0'/>
	I0828 17:15:39.923645   29200 main.go:141] libmachine: (ha-240486-m03)     </serial>
	I0828 17:15:39.923679   29200 main.go:141] libmachine: (ha-240486-m03)     <console type='pty'>
	I0828 17:15:39.923698   29200 main.go:141] libmachine: (ha-240486-m03)       <target type='serial' port='0'/>
	I0828 17:15:39.923711   29200 main.go:141] libmachine: (ha-240486-m03)     </console>
	I0828 17:15:39.923725   29200 main.go:141] libmachine: (ha-240486-m03)     <rng model='virtio'>
	I0828 17:15:39.923734   29200 main.go:141] libmachine: (ha-240486-m03)       <backend model='random'>/dev/random</backend>
	I0828 17:15:39.923740   29200 main.go:141] libmachine: (ha-240486-m03)     </rng>
	I0828 17:15:39.923746   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923752   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923770   29200 main.go:141] libmachine: (ha-240486-m03)   </devices>
	I0828 17:15:39.923789   29200 main.go:141] libmachine: (ha-240486-m03) </domain>
	I0828 17:15:39.923799   29200 main.go:141] libmachine: (ha-240486-m03) 
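
The XML above defines the new ha-240486-m03 VM: it boots from the boot2docker ISO, attaches the raw disk image, joins both the private mk-ha-240486 network and libvirt's default network through virtio NICs, and adds a serial console plus a virtio RNG. As a hedged illustration, the sketch below registers and starts such a domain by shelling out to the standard virsh tooling; the kvm2 driver itself talks to libvirt through its API rather than virsh, and the XML path used here is a placeholder.

// define_domain_sketch.go: hypothetical illustration of registering and
// booting a libvirt domain from an XML definition via virsh.
package main

import (
	"log"
	"os/exec"
)

func main() {
	// "virsh define" registers the domain described by the XML file;
	// "virsh start" then boots it. Both are standard libvirt commands.
	for _, args := range [][]string{
		{"define", "/tmp/ha-240486-m03.xml"}, // placeholder path
		{"start", "ha-240486-m03"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v failed: %v\n%s", args, err, out)
		}
		log.Printf("virsh %v: %s", args, out)
	}
}
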
	I0828 17:15:39.930273   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:e8:20:89 in network default
	I0828 17:15:39.930747   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:39.930764   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring networks are active...
	I0828 17:15:39.931428   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring network default is active
	I0828 17:15:39.931658   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring network mk-ha-240486 is active
	I0828 17:15:39.932000   29200 main.go:141] libmachine: (ha-240486-m03) Getting domain xml...
	I0828 17:15:39.932671   29200 main.go:141] libmachine: (ha-240486-m03) Creating domain...
	I0828 17:15:41.172014   29200 main.go:141] libmachine: (ha-240486-m03) Waiting to get IP...
	I0828 17:15:41.172734   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.173147   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.173196   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.173152   29930 retry.go:31] will retry after 227.598083ms: waiting for machine to come up
	I0828 17:15:41.402806   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.403278   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.403306   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.403240   29930 retry.go:31] will retry after 249.890746ms: waiting for machine to come up
	I0828 17:15:41.656028   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.656449   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.656467   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.656412   29930 retry.go:31] will retry after 456.580621ms: waiting for machine to come up
	I0828 17:15:42.114765   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:42.115241   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:42.115274   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:42.115192   29930 retry.go:31] will retry after 420.923136ms: waiting for machine to come up
	I0828 17:15:42.537966   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:42.538404   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:42.538460   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:42.538356   29930 retry.go:31] will retry after 728.870515ms: waiting for machine to come up
	I0828 17:15:43.268293   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:43.268676   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:43.268704   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:43.268630   29930 retry.go:31] will retry after 802.680619ms: waiting for machine to come up
	I0828 17:15:44.072482   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:44.072962   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:44.072991   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:44.072907   29930 retry.go:31] will retry after 1.076312326s: waiting for machine to come up
	I0828 17:15:45.150919   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:45.151447   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:45.151478   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:45.151406   29930 retry.go:31] will retry after 1.105111399s: waiting for machine to come up
	I0828 17:15:46.258745   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:46.259186   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:46.259210   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:46.259153   29930 retry.go:31] will retry after 1.521636059s: waiting for machine to come up
	I0828 17:15:47.782743   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:47.783150   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:47.783175   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:47.783106   29930 retry.go:31] will retry after 2.061034215s: waiting for machine to come up
	I0828 17:15:49.846879   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:49.847359   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:49.847398   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:49.847316   29930 retry.go:31] will retry after 2.417689828s: waiting for machine to come up
	I0828 17:15:52.267103   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:52.267504   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:52.267529   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:52.267452   29930 retry.go:31] will retry after 2.531691934s: waiting for machine to come up
	I0828 17:15:54.800110   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:54.800491   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:54.800518   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:54.800451   29930 retry.go:31] will retry after 3.301665009s: waiting for machine to come up
	I0828 17:15:58.103319   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:58.103797   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:58.103827   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:58.103739   29930 retry.go:31] will retry after 4.773578468s: waiting for machine to come up
	I0828 17:16:02.881367   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.881716   29200 main.go:141] libmachine: (ha-240486-m03) Found IP for machine: 192.168.39.28
	I0828 17:16:02.881742   29200 main.go:141] libmachine: (ha-240486-m03) Reserving static IP address...
	I0828 17:16:02.881759   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has current primary IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.882039   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find host DHCP lease matching {name: "ha-240486-m03", mac: "52:54:00:2e:b2:44", ip: "192.168.39.28"} in network mk-ha-240486
	I0828 17:16:02.954847   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Getting to WaitForSSH function...
	I0828 17:16:02.954879   29200 main.go:141] libmachine: (ha-240486-m03) Reserved static IP address: 192.168.39.28
	I0828 17:16:02.954892   29200 main.go:141] libmachine: (ha-240486-m03) Waiting for SSH to be available...
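The long run of "will retry after ..." lines above is the driver polling libvirt's DHCP leases for the new domain's MAC address, sleeping a little longer after every miss until the lease appears and the IP can be reserved. A minimal Go sketch of that wait-with-growing-backoff pattern; lookupLeaseIP is a hypothetical stand-in for the real lease query, and the delays only approximate the intervals in the log:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP is a hypothetical helper: it would ask the libvirt network
    // for the DHCP lease belonging to the given MAC and return its IP address.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP retries with a growing, slightly jittered delay, mirroring the
    // increasing "will retry after ..." intervals in the log.
    func waitForIP(mac string, attempts int) (string, error) {
        delay := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("waiting for machine to come up: retrying in %v\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:2e:b2:44", 15); err == nil {
            fmt.Println("found IP:", ip)
        }
    }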
	I0828 17:16:02.957270   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.957635   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486
	I0828 17:16:02.957663   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find defined IP address of network mk-ha-240486 interface with MAC address 52:54:00:2e:b2:44
	I0828 17:16:02.957816   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH client type: external
	I0828 17:16:02.957844   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa (-rw-------)
	I0828 17:16:02.957887   29200 main.go:141] libmachine: (ha-240486-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:16:02.957909   29200 main.go:141] libmachine: (ha-240486-m03) DBG | About to run SSH command:
	I0828 17:16:02.957927   29200 main.go:141] libmachine: (ha-240486-m03) DBG | exit 0
	I0828 17:16:02.962359   29200 main.go:141] libmachine: (ha-240486-m03) DBG | SSH cmd err, output: exit status 255: 
	I0828 17:16:02.962386   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 17:16:02.962395   29200 main.go:141] libmachine: (ha-240486-m03) DBG | command : exit 0
	I0828 17:16:02.962404   29200 main.go:141] libmachine: (ha-240486-m03) DBG | err     : exit status 255
	I0828 17:16:02.962412   29200 main.go:141] libmachine: (ha-240486-m03) DBG | output  : 
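The failed probe just above (exit status 255) is how WaitForSSH operates: it repeatedly runs "exit 0" on the guest through the external ssh binary with the options printed in the log, and any non-zero status means "not ready yet". A rough Go equivalent of that probe, assuming shelling out to ssh is acceptable:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady runs "exit 0" on the guest via the external ssh client with the
    // same kind of options the log shows; a nil error means SSH is usable.
    func sshReady(user, ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        // User and IP taken from the log; the key path is abbreviated here.
        for !sshReady("docker", "192.168.39.28", "/path/to/machines/ha-240486-m03/id_rsa") {
            time.Sleep(3 * time.Second) // the log waits roughly 3s between probes
        }
        fmt.Println("SSH is available")
    }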
	I0828 17:16:05.963328   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Getting to WaitForSSH function...
	I0828 17:16:05.965582   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:05.965990   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:05.966018   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:05.966144   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH client type: external
	I0828 17:16:05.966167   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa (-rw-------)
	I0828 17:16:05.966227   29200 main.go:141] libmachine: (ha-240486-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:16:05.966259   29200 main.go:141] libmachine: (ha-240486-m03) DBG | About to run SSH command:
	I0828 17:16:05.966276   29200 main.go:141] libmachine: (ha-240486-m03) DBG | exit 0
	I0828 17:16:06.090307   29200 main.go:141] libmachine: (ha-240486-m03) DBG | SSH cmd err, output: <nil>: 
	I0828 17:16:06.090633   29200 main.go:141] libmachine: (ha-240486-m03) KVM machine creation complete!
	I0828 17:16:06.090884   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:16:06.091476   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.091736   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.091895   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:16:06.091913   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:16:06.093159   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:16:06.093173   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:16:06.093179   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:16:06.093188   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.095269   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.095642   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.095670   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.095771   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.095940   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.096105   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.096258   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.096461   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.096735   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.096752   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:16:06.197511   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:16:06.197538   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:16:06.197552   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.200467   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.200905   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.200934   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.201099   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.201280   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.201411   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.201583   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.201742   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.201946   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.201960   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:16:06.310570   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:16:06.310643   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:16:06.310656   29200 main.go:141] libmachine: Provisioning with buildroot...
	I0828 17:16:06.310670   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.310918   29200 buildroot.go:166] provisioning hostname "ha-240486-m03"
	I0828 17:16:06.310941   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.311113   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.313515   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.313894   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.313919   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.314028   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.314231   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.314418   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.314621   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.314804   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.314959   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.314972   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486-m03 && echo "ha-240486-m03" | sudo tee /etc/hostname
	I0828 17:16:06.431268   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486-m03
	
	I0828 17:16:06.431296   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.434406   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.434790   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.434824   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.435027   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.435226   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.435413   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.435564   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.435751   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.435920   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.435935   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:16:06.546579   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:16:06.546611   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:16:06.546629   29200 buildroot.go:174] setting up certificates
	I0828 17:16:06.546639   29200 provision.go:84] configureAuth start
	I0828 17:16:06.546647   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.546913   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:06.549427   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.549904   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.549935   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.550116   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.552421   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.552770   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.552799   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.552909   29200 provision.go:143] copyHostCerts
	I0828 17:16:06.552942   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:16:06.552978   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:16:06.552987   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:16:06.553070   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:16:06.553168   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:16:06.553197   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:16:06.553207   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:16:06.553246   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:16:06.553295   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:16:06.553312   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:16:06.553318   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:16:06.553339   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:16:06.553397   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486-m03 san=[127.0.0.1 192.168.39.28 ha-240486-m03 localhost minikube]
	I0828 17:16:06.591711   29200 provision.go:177] copyRemoteCerts
	I0828 17:16:06.591761   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:16:06.591782   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.594451   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.594917   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.594957   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.595083   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.595305   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.595445   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.595594   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:06.676118   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:16:06.676193   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:16:06.698865   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:16:06.698950   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 17:16:06.721497   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:16:06.721559   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:16:06.743986   29200 provision.go:87] duration metric: took 197.335179ms to configureAuth
	I0828 17:16:06.744022   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:16:06.744263   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:06.744340   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.747225   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.747573   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.747603   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.747794   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.747997   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.748195   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.748372   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.748562   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.748745   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.748767   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:16:06.964653   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:16:06.964678   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:16:06.964687   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetURL
	I0828 17:16:06.965854   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using libvirt version 6000000
	I0828 17:16:06.967687   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.968051   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.968072   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.968223   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:16:06.968250   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:16:06.968256   29200 client.go:171] duration metric: took 27.424311592s to LocalClient.Create
	I0828 17:16:06.968278   29200 start.go:167] duration metric: took 27.424361459s to libmachine.API.Create "ha-240486"
	I0828 17:16:06.968291   29200 start.go:293] postStartSetup for "ha-240486-m03" (driver="kvm2")
	I0828 17:16:06.968305   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:16:06.968331   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.968547   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:16:06.968576   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.970418   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.970723   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.970749   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.970870   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.971032   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.971150   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.971259   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.052135   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:16:07.056138   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:16:07.056163   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:16:07.056240   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:16:07.056335   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:16:07.056347   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:16:07.056461   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:16:07.066071   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:16:07.089057   29200 start.go:296] duration metric: took 120.749316ms for postStartSetup
	I0828 17:16:07.089098   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:16:07.089669   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:07.092079   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.092440   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.092469   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.092732   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:16:07.092949   29200 start.go:128] duration metric: took 27.566995404s to createHost
	I0828 17:16:07.092975   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:07.095233   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.095535   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.095580   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.095708   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.095903   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.096056   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.096205   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.096422   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:07.096632   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:07.096648   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:16:07.198563   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865367.179990749
	
	I0828 17:16:07.198592   29200 fix.go:216] guest clock: 1724865367.179990749
	I0828 17:16:07.198603   29200 fix.go:229] Guest: 2024-08-28 17:16:07.179990749 +0000 UTC Remote: 2024-08-28 17:16:07.092961015 +0000 UTC m=+138.865457633 (delta=87.029734ms)
	I0828 17:16:07.198622   29200 fix.go:200] guest clock delta is within tolerance: 87.029734ms
	I0828 17:16:07.198632   29200 start.go:83] releasing machines lock for "ha-240486-m03", held for 27.672780347s
	I0828 17:16:07.198652   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.198921   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:07.201767   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.202197   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.202231   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.204670   29200 out.go:177] * Found network options:
	I0828 17:16:07.205999   29200 out.go:177]   - NO_PROXY=192.168.39.227,192.168.39.103
	W0828 17:16:07.207467   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	W0828 17:16:07.207496   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:16:07.207514   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208065   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208264   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208381   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:16:07.208420   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	W0828 17:16:07.208456   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	W0828 17:16:07.208482   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:16:07.208545   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:16:07.208566   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:07.211258   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211504   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211681   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.211710   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211874   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.212071   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.212265   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.212398   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.212420   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.212461   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.212575   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.212714   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.212888   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.213024   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.478026   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:16:07.483696   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:16:07.483750   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:16:07.505666   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:16:07.505693   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:16:07.505747   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:16:07.522613   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:16:07.536542   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:16:07.536609   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:16:07.550287   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:16:07.564020   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:16:07.680205   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:16:07.828463   29200 docker.go:233] disabling docker service ...
	I0828 17:16:07.828523   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:16:07.841867   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:16:07.854340   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:16:07.987258   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:16:08.095512   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:16:08.108828   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:16:08.125742   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:16:08.125807   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.135295   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:16:08.135363   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.144580   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.153785   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.163132   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:16:08.176566   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.186664   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.202268   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.212012   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:16:08.220505   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:16:08.220560   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:16:08.233919   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:16:08.243089   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:08.348646   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
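Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (under the file's existing sections) before the restart; this is reconstructed from the commands shown, not a file captured from the node:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

In the same pass, /etc/crictl.yaml is written so crictl talks to unix:///var/run/crio/crio.sock, which is the socket the next step waits on.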
	I0828 17:16:08.436411   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:16:08.436489   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:16:08.440859   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:16:08.440918   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:16:08.444665   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:16:08.485561   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:16:08.485636   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:16:08.512223   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:16:08.541846   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:16:08.543241   29200 out.go:177]   - env NO_PROXY=192.168.39.227
	I0828 17:16:08.544487   29200 out.go:177]   - env NO_PROXY=192.168.39.227,192.168.39.103
	I0828 17:16:08.545568   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:08.548178   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:08.548583   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:08.548611   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:08.548795   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:16:08.552944   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:16:08.565072   29200 mustload.go:65] Loading cluster: ha-240486
	I0828 17:16:08.565312   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:08.565625   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:08.565664   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:08.581402   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0828 17:16:08.581843   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:08.582342   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:08.582370   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:08.582727   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:08.582912   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:16:08.584362   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:16:08.584649   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:08.584683   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:08.601185   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0828 17:16:08.601556   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:08.601984   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:08.602004   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:08.602324   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:08.602512   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:16:08.602712   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.28
	I0828 17:16:08.602725   29200 certs.go:194] generating shared ca certs ...
	I0828 17:16:08.602741   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.602883   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:16:08.602962   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:16:08.602974   29200 certs.go:256] generating profile certs ...
	I0828 17:16:08.603069   29200 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:16:08.603100   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9
	I0828 17:16:08.603119   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.28 192.168.39.254]
	I0828 17:16:08.726654   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 ...
	I0828 17:16:08.726683   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9: {Name:mk7b521344b243403383813c675a0854fb8cab41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.726872   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9 ...
	I0828 17:16:08.726889   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9: {Name:mk8d14edb46ee42a5ec5b7143c6e1b74d0a4bd2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.726980   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:16:08.727154   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
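The apiserver certificate regenerated above must be valid for every address a client might use: the in-cluster service IP 10.96.0.1, localhost, both existing control-plane nodes, the new node's 192.168.39.28, and the HA VIP 192.168.39.254. A compact Go sketch of issuing a certificate with that IP SAN list; it is self-signed here for brevity, whereas the real cert is signed by the cluster CA, and the subject is illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // IP SANs taken from the log line above; the subject is illustrative only.
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.103"),
                net.ParseIP("192.168.39.28"), net.ParseIP("192.168.39.254"),
            },
        }
        // Self-signed for the sketch; minikube signs with its CA key and cert instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }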
	I0828 17:16:08.727337   29200 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:16:08.727356   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:16:08.727374   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:16:08.727400   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:16:08.727418   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:16:08.727435   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:16:08.727452   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:16:08.727469   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:16:08.727486   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:16:08.727552   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:16:08.727591   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:16:08.727604   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:16:08.727645   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:16:08.727674   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:16:08.727705   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:16:08.727761   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:16:08.727795   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:08.727814   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:16:08.727833   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:16:08.727871   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:16:08.730779   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:08.731196   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:16:08.731226   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:08.731361   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:16:08.731559   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:16:08.731728   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:16:08.731884   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:16:08.806441   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0828 17:16:08.811394   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0828 17:16:08.822312   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0828 17:16:08.826934   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0828 17:16:08.837874   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0828 17:16:08.841762   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0828 17:16:08.853116   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0828 17:16:08.857249   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0828 17:16:08.866997   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0828 17:16:08.870701   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0828 17:16:08.879828   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0828 17:16:08.883604   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0828 17:16:08.893126   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:16:08.917041   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:16:08.941523   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:16:08.963919   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:16:08.986263   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0828 17:16:09.009214   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:16:09.034619   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:16:09.059992   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:16:09.084963   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:16:09.109712   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:16:09.131789   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:16:09.153702   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0828 17:16:09.168749   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0828 17:16:09.184636   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0828 17:16:09.200091   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0828 17:16:09.215008   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0828 17:16:09.230529   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0828 17:16:09.246295   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0828 17:16:09.261861   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:16:09.267139   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:16:09.276755   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.280738   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.280786   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.286057   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:16:09.295691   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:16:09.305444   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.309439   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.309495   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.314706   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:16:09.324354   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:16:09.334045   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.338635   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.338694   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.343970   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 17:16:09.353891   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:16:09.357712   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:16:09.357772   29200 kubeadm.go:934] updating node {m03 192.168.39.28 8443 v1.31.0 crio true true} ...
	I0828 17:16:09.357872   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:16:09.357909   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:16:09.357960   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:16:09.374847   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:16:09.374907   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0828 17:16:09.374958   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:16:09.384037   29200 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0828 17:16:09.384089   29200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0828 17:16:09.392959   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0828 17:16:09.392977   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0828 17:16:09.392988   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:16:09.392996   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:16:09.392960   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0828 17:16:09.393060   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:16:09.393049   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:16:09.393100   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:16:09.409197   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0828 17:16:09.409237   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0828 17:16:09.409260   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:16:09.409314   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0828 17:16:09.409335   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0828 17:16:09.409343   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:16:09.442382   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0828 17:16:09.442426   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0828 17:16:10.291909   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0828 17:16:10.302278   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0828 17:16:10.319531   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:16:10.336811   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:16:10.353567   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:16:10.357434   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:16:10.369450   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:10.477921   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:16:10.493652   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:16:10.493999   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:10.494038   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:10.512191   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0828 17:16:10.512614   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:10.513055   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:10.513081   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:10.513416   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:10.513601   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:16:10.513758   29200 start.go:317] joinCluster: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:16:10.513880   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0828 17:16:10.513897   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:16:10.516326   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:10.516806   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:16:10.516830   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:10.516997   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:16:10.517137   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:16:10.517271   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:16:10.517451   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:16:10.663120   29200 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:16:10.663172   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5l4itz.xeascawi8wyu6ziv --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m03 --control-plane --apiserver-advertise-address=192.168.39.28 --apiserver-bind-port=8443"
	I0828 17:16:34.010919   29200 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5l4itz.xeascawi8wyu6ziv --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m03 --control-plane --apiserver-advertise-address=192.168.39.28 --apiserver-bind-port=8443": (23.34771997s)
	I0828 17:16:34.010954   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0828 17:16:34.433957   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486-m03 minikube.k8s.io/updated_at=2024_08_28T17_16_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=false
	I0828 17:16:34.596941   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-240486-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0828 17:16:34.718821   29200 start.go:319] duration metric: took 24.205058483s to joinCluster
	I0828 17:16:34.718905   29200 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:16:34.719248   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:34.720195   29200 out.go:177] * Verifying Kubernetes components...
	I0828 17:16:34.721391   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:34.929245   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:16:34.947136   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:16:34.947467   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0828 17:16:34.947551   29200 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0828 17:16:34.947825   29200 node_ready.go:35] waiting up to 6m0s for node "ha-240486-m03" to be "Ready" ...
	I0828 17:16:34.947925   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:34.947936   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:34.947948   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:34.947960   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:34.951547   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:35.448825   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:35.448852   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:35.448864   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:35.448870   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:35.452064   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:35.948289   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:35.948311   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:35.948322   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:35.948326   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:35.951681   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.448834   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:36.448857   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:36.448866   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:36.448869   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:36.452315   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.948048   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:36.948071   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:36.948081   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:36.948087   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:36.951955   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.952483   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:37.448931   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:37.448953   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:37.448963   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:37.448970   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:37.452509   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:37.948330   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:37.948349   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:37.948359   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:37.948363   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:37.951947   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.448739   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:38.448768   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:38.448780   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:38.448785   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:38.451989   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.949001   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:38.949026   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:38.949036   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:38.949040   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:38.952828   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.953471   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:39.448839   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:39.448862   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:39.448872   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:39.448876   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:39.451920   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:39.948963   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:39.948999   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:39.949011   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:39.949016   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:39.952092   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:40.448115   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:40.448149   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:40.448166   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:40.448174   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:40.451580   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:40.948234   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:40.948258   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:40.948269   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:40.948275   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:40.951032   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:41.449101   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:41.449184   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:41.449200   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:41.449206   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:41.459709   29200 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0828 17:16:41.461843   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:41.948021   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:41.948046   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:41.948057   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:41.948063   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:41.951063   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:42.449029   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:42.449051   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:42.449060   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:42.449063   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:42.452173   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:42.949024   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:42.949045   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:42.949056   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:42.949065   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:42.953137   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:43.448737   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:43.448769   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:43.448779   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:43.448786   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:43.451797   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:43.948226   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:43.948249   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:43.948257   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:43.948261   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:43.951313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:43.951973   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:44.448846   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:44.448870   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:44.448878   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:44.448883   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:44.451958   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:44.948937   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:44.948959   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:44.948967   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:44.948971   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:44.951966   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:45.449016   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:45.449041   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:45.449049   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:45.449052   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:45.452583   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:45.948773   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:45.948795   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:45.948804   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:45.948810   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:45.951869   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:45.952337   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:46.448804   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:46.448834   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:46.448846   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:46.448852   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:46.451969   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:46.948717   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:46.948742   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:46.948750   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:46.948754   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:46.953324   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:47.448254   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:47.448276   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:47.448289   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:47.448295   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:47.452156   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:47.948124   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:47.948148   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:47.948159   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:47.948165   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:47.952909   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:47.953459   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:48.448719   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:48.448740   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:48.448748   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:48.448752   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:48.452018   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:48.948015   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:48.948043   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:48.948052   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:48.948056   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:48.950826   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:49.448227   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:49.448250   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:49.448258   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:49.448262   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:49.451879   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:49.948800   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:49.948821   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:49.948829   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:49.948833   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:49.952050   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:50.448004   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:50.448024   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:50.448032   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:50.448038   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:50.451346   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:50.451909   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:50.948669   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:50.948695   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:50.948708   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:50.948715   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:50.952701   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:51.448003   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:51.448027   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:51.448035   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:51.448040   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:51.451036   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:51.948963   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:51.948983   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:51.948991   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:51.948994   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:51.951688   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.448612   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.448641   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.448654   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.448665   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.451662   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.452122   29200 node_ready.go:49] node "ha-240486-m03" has status "Ready":"True"
	I0828 17:16:52.452141   29200 node_ready.go:38] duration metric: took 17.504298399s for node "ha-240486-m03" to be "Ready" ...
	I0828 17:16:52.452151   29200 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:16:52.452216   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:52.452230   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.452240   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.452246   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.458514   29200 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0828 17:16:52.465149   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.465243   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wtzml
	I0828 17:16:52.465255   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.465266   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.465271   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.467996   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.468617   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.468632   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.468639   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.468644   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.471395   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.471762   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.471779   29200 pod_ready.go:82] duration metric: took 6.604558ms for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.471788   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.471833   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x562s
	I0828 17:16:52.471841   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.471847   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.471851   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.474021   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.474714   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.474727   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.474734   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.474738   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.476781   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.477183   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.477201   29200 pod_ready.go:82] duration metric: took 5.406335ms for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.477214   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.477266   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486
	I0828 17:16:52.477277   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.477287   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.477294   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.479394   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.479851   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.479863   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.479870   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.479873   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.481735   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:16:52.482221   29200 pod_ready.go:93] pod "etcd-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.482237   29200 pod_ready.go:82] duration metric: took 5.01562ms for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.482248   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.482304   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m02
	I0828 17:16:52.482314   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.482324   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.482333   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.484876   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.485297   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:52.485312   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.485322   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.485327   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.487514   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.487912   29200 pod_ready.go:93] pod "etcd-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.487927   29200 pod_ready.go:82] duration metric: took 5.67224ms for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.487934   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.649329   29200 request.go:632] Waited for 161.343759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m03
	I0828 17:16:52.649421   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m03
	I0828 17:16:52.649433   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.649441   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.649447   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.652720   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:52.849578   29200 request.go:632] Waited for 196.340431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.849673   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.849680   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.849697   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.849704   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.853178   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:52.853893   29200 pod_ready.go:93] pod "etcd-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.853915   29200 pod_ready.go:82] duration metric: took 365.973206ms for pod "etcd-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.853937   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.048932   29200 request.go:632] Waited for 194.927532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:16:53.049007   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:16:53.049013   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.049021   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.049030   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.052313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.249340   29200 request.go:632] Waited for 196.380576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:53.249433   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:53.249439   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.249449   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.249458   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.253418   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.254118   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:53.254137   29200 pod_ready.go:82] duration metric: took 400.191683ms for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.254150   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.448718   29200 request.go:632] Waited for 194.496513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:16:53.448773   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:16:53.448778   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.448785   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.448789   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.452092   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.648618   29200 request.go:632] Waited for 195.775747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:53.648716   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:53.648728   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.648738   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.648742   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.652212   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.652756   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:53.652774   29200 pod_ready.go:82] duration metric: took 398.616132ms for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.652786   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.848927   29200 request.go:632] Waited for 196.04388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m03
	I0828 17:16:53.848989   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m03
	I0828 17:16:53.848996   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.849006   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.849017   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.852769   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.048790   29200 request.go:632] Waited for 195.282477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:54.048874   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:54.048883   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.048891   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.048896   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.052238   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.053003   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.053024   29200 pod_ready.go:82] duration metric: took 400.227358ms for pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.053037   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.249140   29200 request.go:632] Waited for 196.038014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:16:54.249209   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:16:54.249216   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.249224   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.249236   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.252312   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.449412   29200 request.go:632] Waited for 196.369336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:54.449483   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:54.449488   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.449495   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.449499   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.452556   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.452976   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.452994   29200 pod_ready.go:82] duration metric: took 399.949839ms for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.453003   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.649574   29200 request.go:632] Waited for 196.481532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:16:54.649640   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:16:54.649646   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.649654   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.649658   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.653202   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.849043   29200 request.go:632] Waited for 195.224597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:54.849092   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:54.849097   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.849108   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.849113   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.852286   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.852964   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.852987   29200 pod_ready.go:82] duration metric: took 399.974077ms for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.853002   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.049516   29200 request.go:632] Waited for 196.439033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m03
	I0828 17:16:55.049570   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m03
	I0828 17:16:55.049575   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.049582   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.049588   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.052994   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:55.249029   29200 request.go:632] Waited for 195.36517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:55.249100   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:55.249108   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.249120   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.249127   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.252059   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:55.252757   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:55.252776   29200 pod_ready.go:82] duration metric: took 399.764707ms for pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.252790   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.448810   29200 request.go:632] Waited for 195.952202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:16:55.448891   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:16:55.448897   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.448905   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.448910   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.452174   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:55.649253   29200 request.go:632] Waited for 196.378674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:55.649328   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:55.649336   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.649347   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.649370   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.652255   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:55.652849   29200 pod_ready.go:93] pod "kube-proxy-4w7tt" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:55.652865   29200 pod_ready.go:82] duration metric: took 400.068294ms for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.652874   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.849052   29200 request.go:632] Waited for 196.115456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:16:55.849137   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:16:55.849146   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.849157   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.849163   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.852354   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.049440   29200 request.go:632] Waited for 196.352699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.049507   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.049512   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.049520   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.049525   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.052552   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.053050   29200 pod_ready.go:93] pod "kube-proxy-jdnzs" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.053068   29200 pod_ready.go:82] duration metric: took 400.187423ms for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.053081   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ktw9z" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.249138   29200 request.go:632] Waited for 195.985128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktw9z
	I0828 17:16:56.249229   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktw9z
	I0828 17:16:56.249240   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.249252   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.249263   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.252728   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.449673   29200 request.go:632] Waited for 196.397721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:56.449724   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:56.449729   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.449737   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.449742   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.452895   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.453443   29200 pod_ready.go:93] pod "kube-proxy-ktw9z" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.453460   29200 pod_ready.go:82] duration metric: took 400.371434ms for pod "kube-proxy-ktw9z" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.453468   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.649618   29200 request.go:632] Waited for 196.078175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:16:56.649671   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:16:56.649676   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.649686   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.649693   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.653175   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.848936   29200 request.go:632] Waited for 195.219368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.849028   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.849039   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.849047   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.849050   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.852177   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.852638   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.852660   29200 pod_ready.go:82] duration metric: took 399.184775ms for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.852677   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.049552   29200 request.go:632] Waited for 196.789794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:16:57.049607   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:16:57.049620   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.049629   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.049633   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.052639   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:57.249603   29200 request.go:632] Waited for 196.390918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:57.249663   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:57.249669   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.249676   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.249680   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.252880   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.253381   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:57.253399   29200 pod_ready.go:82] duration metric: took 400.711283ms for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.253408   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.449469   29200 request.go:632] Waited for 195.958076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m03
	I0828 17:16:57.449541   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m03
	I0828 17:16:57.449557   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.449569   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.449577   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.453113   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.649166   29200 request.go:632] Waited for 195.360322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:57.649218   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:57.649223   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.649231   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.649234   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.652459   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.652965   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:57.652982   29200 pod_ready.go:82] duration metric: took 399.56894ms for pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.652993   29200 pod_ready.go:39] duration metric: took 5.20083003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:16:57.653006   29200 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:16:57.653056   29200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:16:57.672165   29200 api_server.go:72] duration metric: took 22.953223062s to wait for apiserver process to appear ...
	I0828 17:16:57.672193   29200 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:16:57.672211   29200 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0828 17:16:57.676355   29200 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0828 17:16:57.676423   29200 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0828 17:16:57.676433   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.676444   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.676452   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.677394   29200 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0828 17:16:57.677452   29200 api_server.go:141] control plane version: v1.31.0
	I0828 17:16:57.677468   29200 api_server.go:131] duration metric: took 5.26686ms to wait for apiserver health ...
	I0828 17:16:57.677480   29200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:16:57.848823   29200 request.go:632] Waited for 171.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:57.848874   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:57.848880   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.848887   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.848892   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.854303   29200 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0828 17:16:57.860479   29200 system_pods.go:59] 24 kube-system pods found
	I0828 17:16:57.860505   29200 system_pods.go:61] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:16:57.860510   29200 system_pods.go:61] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:16:57.860514   29200 system_pods.go:61] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:16:57.860517   29200 system_pods.go:61] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:16:57.860520   29200 system_pods.go:61] "etcd-ha-240486-m03" [a43d3636-8296-40e7-8975-fb113ef5e8db] Running
	I0828 17:16:57.860523   29200 system_pods.go:61] "kindnet-bgr7f" [8c938a5d-5f3b-487b-a422-94cfda96c35d] Running
	I0828 17:16:57.860527   29200 system_pods.go:61] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:16:57.860530   29200 system_pods.go:61] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:16:57.860533   29200 system_pods.go:61] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:16:57.860538   29200 system_pods.go:61] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:16:57.860541   29200 system_pods.go:61] "kube-apiserver-ha-240486-m03" [9d4a7b86-acd1-4cbd-a97b-1a3269adeff7] Running
	I0828 17:16:57.860544   29200 system_pods.go:61] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:16:57.860549   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:16:57.860552   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m03" [cad610de-6a16-4347-9f6a-8d8a8b5bda54] Running
	I0828 17:16:57.860556   29200 system_pods.go:61] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:16:57.860559   29200 system_pods.go:61] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:16:57.860562   29200 system_pods.go:61] "kube-proxy-ktw9z" [d53ddde6-1a83-498f-90bb-ea71dce1d595] Running
	I0828 17:16:57.860565   29200 system_pods.go:61] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:16:57.860568   29200 system_pods.go:61] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:16:57.860570   29200 system_pods.go:61] "kube-scheduler-ha-240486-m03" [73dc0f31-c42b-4ee4-8d92-8ac9f09d2f06] Running
	I0828 17:16:57.860574   29200 system_pods.go:61] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:16:57.860578   29200 system_pods.go:61] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:16:57.860581   29200 system_pods.go:61] "kube-vip-ha-240486-m03" [86259d01-d574-4408-892a-ed17b0b74e91] Running
	I0828 17:16:57.860584   29200 system_pods.go:61] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:16:57.860590   29200 system_pods.go:74] duration metric: took 183.101069ms to wait for pod list to return data ...
	I0828 17:16:57.860600   29200 default_sa.go:34] waiting for default service account to be created ...
	I0828 17:16:58.049034   29200 request.go:632] Waited for 188.361616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:16:58.049099   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:16:58.049104   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.049111   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.049118   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.052878   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:58.052984   29200 default_sa.go:45] found service account: "default"
	I0828 17:16:58.052997   29200 default_sa.go:55] duration metric: took 192.392294ms for default service account to be created ...
	I0828 17:16:58.053004   29200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 17:16:58.249510   29200 request.go:632] Waited for 196.434256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:58.249570   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:58.249577   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.249587   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.249597   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.257387   29200 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0828 17:16:58.263744   29200 system_pods.go:86] 24 kube-system pods found
	I0828 17:16:58.263769   29200 system_pods.go:89] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:16:58.263775   29200 system_pods.go:89] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:16:58.263779   29200 system_pods.go:89] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:16:58.263783   29200 system_pods.go:89] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:16:58.263786   29200 system_pods.go:89] "etcd-ha-240486-m03" [a43d3636-8296-40e7-8975-fb113ef5e8db] Running
	I0828 17:16:58.263790   29200 system_pods.go:89] "kindnet-bgr7f" [8c938a5d-5f3b-487b-a422-94cfda96c35d] Running
	I0828 17:16:58.263793   29200 system_pods.go:89] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:16:58.263797   29200 system_pods.go:89] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:16:58.263799   29200 system_pods.go:89] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:16:58.263804   29200 system_pods.go:89] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:16:58.263810   29200 system_pods.go:89] "kube-apiserver-ha-240486-m03" [9d4a7b86-acd1-4cbd-a97b-1a3269adeff7] Running
	I0828 17:16:58.263815   29200 system_pods.go:89] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:16:58.263821   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:16:58.263829   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m03" [cad610de-6a16-4347-9f6a-8d8a8b5bda54] Running
	I0828 17:16:58.263835   29200 system_pods.go:89] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:16:58.263845   29200 system_pods.go:89] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:16:58.263850   29200 system_pods.go:89] "kube-proxy-ktw9z" [d53ddde6-1a83-498f-90bb-ea71dce1d595] Running
	I0828 17:16:58.263853   29200 system_pods.go:89] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:16:58.263857   29200 system_pods.go:89] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:16:58.263863   29200 system_pods.go:89] "kube-scheduler-ha-240486-m03" [73dc0f31-c42b-4ee4-8d92-8ac9f09d2f06] Running
	I0828 17:16:58.263867   29200 system_pods.go:89] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:16:58.263872   29200 system_pods.go:89] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:16:58.263877   29200 system_pods.go:89] "kube-vip-ha-240486-m03" [86259d01-d574-4408-892a-ed17b0b74e91] Running
	I0828 17:16:58.263882   29200 system_pods.go:89] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:16:58.263888   29200 system_pods.go:126] duration metric: took 210.877499ms to wait for k8s-apps to be running ...
	I0828 17:16:58.263898   29200 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 17:16:58.263948   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:16:58.279096   29200 system_svc.go:56] duration metric: took 15.178702ms WaitForService to wait for kubelet
	I0828 17:16:58.279128   29200 kubeadm.go:582] duration metric: took 23.560183555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:16:58.279150   29200 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:16:58.448629   29200 request.go:632] Waited for 169.400673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0828 17:16:58.448688   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0828 17:16:58.448697   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.448705   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.448709   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.452448   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:58.453479   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453499   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453510   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453514   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453518   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453521   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453525   29200 node_conditions.go:105] duration metric: took 174.369219ms to run NodePressure ...
	I0828 17:16:58.453535   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:16:58.453554   29200 start.go:255] writing updated cluster config ...
	I0828 17:16:58.453813   29200 ssh_runner.go:195] Run: rm -f paused
	I0828 17:16:58.504720   29200 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 17:16:58.506500   29200 out.go:177] * Done! kubectl is now configured to use "ha-240486" cluster and "default" namespace by default
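	[editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's built-in token-bucket rate limiter, which delays each request once the client exceeds its configured QPS/Burst (the defaults are QPS=5, Burst=10); they are expected while the readiness loop polls every control-plane pod back to back and are not an apiserver-side error. A minimal, illustrative sketch of the relevant knob is below; it is not minikube's code, and the kubeconfig path used is only an assumption for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig; RecommendedHomeFile ($HOME/.kube/config) is assumed here for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}

		// client-go throttles requests client-side with a token bucket. The defaults
		// (QPS=5, Burst=10) are what produce the ~200ms "client-side throttling" waits
		// seen in the log above when many GETs are issued back to back. Raising them
		// reduces that artificial delay for bursty polling loops.
		config.QPS = 50
		config.Burst = 100

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		// Same kind of request the readiness loop makes: list kube-system pods.
		pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube-system pods: %d\n", len(pods.Items))
	}

	With the defaults left in place, a burst of GETs like the one in this trace is spaced out to roughly one request per 200ms, which matches the ~195-196ms waits logged above.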
	
	
	==> CRI-O <==
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.484175421Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09db039c-6ad5-4744-a788-c8be910c6002 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.484643449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634484626045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09db039c-6ad5-4744-a788-c8be910c6002 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.490503512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20a44772-7216-4f90-a10f-6f883dc1a1fd name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.490577287Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20a44772-7216-4f90-a10f-6f883dc1a1fd name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.491898898Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccc2b3e6-709c-49a7-9d22-b421dff89cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.492356328Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634492337074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccc2b3e6-709c-49a7-9d22-b421dff89cd3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.492897147Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a86cd8d5-6a95-4f40-bc7b-530f7a8b4e21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.492985994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a86cd8d5-6a95-4f40-bc7b-530f7a8b4e21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.493454598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a86cd8d5-6a95-4f40-bc7b-530f7a8b4e21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.497358562Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=d0bdf5c2-ea77-4024-97e5-bf32a67641da name=/runtime.v1.RuntimeService/Status
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.497424877Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d0bdf5c2-ea77-4024-97e5-bf32a67641da name=/runtime.v1.RuntimeService/Status
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.531909305Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8ecc047-ac18-42b0-9dfc-c42514d519a3 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.532037372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8ecc047-ac18-42b0-9dfc-c42514d519a3 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.533145140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa80cc9a-0abe-4f68-b7d1-748654c4d043 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.533593617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634533571156,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa80cc9a-0abe-4f68-b7d1-748654c4d043 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.534134493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa20031b-6652-47a1-a1a0-0a244a7a8f28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.534205652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa20031b-6652-47a1-a1a0-0a244a7a8f28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.534466407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa20031b-6652-47a1-a1a0-0a244a7a8f28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.570225463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c3b4436f-19de-41b4-a481-ec3458dc101a name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.570315568Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3b4436f-19de-41b4-a481-ec3458dc101a name=/runtime.v1.RuntimeService/Version
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.571642213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5d29a10-10c5-4c79-b85c-46811fbc3b87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.572399356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634572373428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5d29a10-10c5-4c79-b85c-46811fbc3b87 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.573017293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3627994d-3460-4456-9a49-ac752b642e91 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.573082281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3627994d-3460-4456-9a49-ac752b642e91 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:20:34 ha-240486 crio[664]: time="2024-08-28 17:20:34.573355771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3627994d-3460-4456-9a49-ac752b642e91 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5a3adee06612       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   23adeed9e41e9       busybox-7dff88458-tnmmz
	687020da7d252       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   375c7b919327c       coredns-6f6b679f8f-x562s
	5171fb49fa83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   2efd086107969       coredns-6f6b679f8f-wtzml
	3aa8b2f45c32d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   9d51ffa046dff       storage-provisioner
	a200b18d5b49f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   0d9937bfda982       kindnet-pb8m7
	5da7c6652ad91       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   762e2586bed26       kube-proxy-jdnzs
	e264b3c2fcf6e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8e53bb3dd1994       kube-vip-ha-240486
	1396de2dd1902       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      6 minutes ago       Running             kube-scheduler            0                   98bba66b20012       kube-scheduler-ha-240486
	6006f9215c80c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   2280901ed00fa       etcd-ha-240486
	594ab811e29b5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      6 minutes ago       Running             kube-controller-manager   0                   4aed61a544225       kube-controller-manager-ha-240486
	6c141f787017a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      6 minutes ago       Running             kube-apiserver            0                   b76fb735e82a8       kube-apiserver-ha-240486
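
Note on the table above: this is the CRI-level container listing that minikube captures for the failed test; the container IDs and pod names match the ListContainers responses in the crio debug log. Assuming shell access to the ha-240486 VM is still available, a roughly equivalent listing could be reproduced by hand (illustrative command, not part of the recorded test run):

    $ minikube -p ha-240486 ssh -- sudo crictl ps -a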
	
	
	==> coredns [5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc] <==
	[INFO] 10.244.0.4:54948 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000096404s
	[INFO] 10.244.0.4:39957 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002328593s
	[INFO] 10.244.1.2:42445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000451167s
	[INFO] 10.244.3.2:36990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118942s
	[INFO] 10.244.3.2:49081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261149s
	[INFO] 10.244.3.2:35420 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157575s
	[INFO] 10.244.3.2:45145 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000273687s
	[INFO] 10.244.0.4:59568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001810378s
	[INFO] 10.244.1.2:40640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138766s
	[INFO] 10.244.1.2:36403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155827s
	[INFO] 10.244.1.2:57247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096044s
	[INFO] 10.244.3.2:58745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021909s
	[INFO] 10.244.3.2:52666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001012s
	[INFO] 10.244.3.2:55195 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202518s
	[INFO] 10.244.0.4:50754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164536s
	[INFO] 10.244.0.4:52876 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113989s
	[INFO] 10.244.1.2:43752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149181s
	[INFO] 10.244.1.2:39336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272379s
	[INFO] 10.244.1.2:54086 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180306s
	[INFO] 10.244.1.2:35731 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186612s
	[INFO] 10.244.3.2:38396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014603s
	[INFO] 10.244.3.2:37082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155781s
	[INFO] 10.244.0.4:42529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117311s
	[INFO] 10.244.0.4:54981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113539s
	[INFO] 10.244.0.4:46325 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065905s
	
	
	==> coredns [687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342] <==
	[INFO] 10.244.1.2:51840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159319s
	[INFO] 10.244.1.2:45908 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003464807s
	[INFO] 10.244.1.2:45832 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227329s
	[INFO] 10.244.1.2:55717 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010110062s
	[INFO] 10.244.1.2:36777 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189682s
	[INFO] 10.244.1.2:33751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105145s
	[INFO] 10.244.1.2:34860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088194s
	[INFO] 10.244.3.2:43474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844418s
	[INFO] 10.244.3.2:42113 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123683s
	[INFO] 10.244.3.2:54119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001316499s
	[INFO] 10.244.3.2:41393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061254s
	[INFO] 10.244.0.4:35761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174103s
	[INFO] 10.244.0.4:35492 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135318s
	[INFO] 10.244.0.4:41816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037492s
	[INFO] 10.244.0.4:56198 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00165456s
	[INFO] 10.244.0.4:42294 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034332s
	[INFO] 10.244.0.4:49049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062307s
	[INFO] 10.244.0.4:43851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033836s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119804s
	[INFO] 10.244.3.2:50434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105903s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063169s
	[INFO] 10.244.0.4:51605 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004099s
	[INFO] 10.244.3.2:53550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157853s
	[INFO] 10.244.3.2:55570 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000261867s
	[INFO] 10.244.0.4:50195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000278101s
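
Note on the CoreDNS logs above: each line comes from CoreDNS's log plugin and records, roughly, the client address, the query (type, class, name, transport, message size), the response code, the DNS flags, the response size, and the query duration. As a hedged example, the same resolution path could be exercised from inside the cluster with a throwaway pod (the ha-240486 kubectl context and the busybox image are assumptions based on the listings above):

    $ kubectl --context ha-240486 run dns-check --rm -it --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -- nslookup kubernetes.default.svc.cluster.local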
	
	
	==> describe nodes <==
	Name:               ha-240486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:20:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-240486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73dbe7f63fd4c3baf977a4b53641230
	  System UUID:                b73dbe7f-63fd-4c3b-af97-7a4b53641230
	  Boot ID:                    cb154fe5-0aad-4938-bd54-d2af34922b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tnmmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-6f6b679f8f-wtzml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 coredns-6f6b679f8f-x562s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m6s
	  kube-system                 etcd-ha-240486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m10s
	  kube-system                 kindnet-pb8m7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-240486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-controller-manager-ha-240486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-proxy-jdnzs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-240486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-vip-ha-240486                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m5s   kube-proxy       
	  Normal  Starting                 6m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s  kubelet          Node ha-240486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s  kubelet          Node ha-240486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s  kubelet          Node ha-240486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m7s   node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal  NodeReady                5m50s  kubelet          Node ha-240486 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal  RegisteredNode           3m55s  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	
	
	Name:               ha-240486-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:15:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:18:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-240486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9be8698d6a9a4f2dbc236b4faf8196d2
	  System UUID:                9be8698d-6a9a-4f2d-bc23-6b4faf8196d2
	  Boot ID:                    d7ccf2dd-2975-4d65-8e82-89ec9777ddfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5pjcm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-240486-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m18s
	  kube-system                 kindnet-q9q9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m20s
	  kube-system                 kube-apiserver-ha-240486-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-controller-manager-ha-240486-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-4w7tt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-ha-240486-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-vip-ha-240486-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m16s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     5m20s                  cidrAllocator    Node ha-240486-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  5m20s (x8 over 5m20s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m20s (x8 over 5m20s)  kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m20s (x7 over 5m20s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m17s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           5m12s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  NodeNotReady             107s                   node-controller  Node ha-240486-m02 status is now: NodeNotReady
	
	
	Name:               ha-240486-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_16_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:20:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    ha-240486-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b793a5caef8d481e8356b8025697789a
	  System UUID:                b793a5ca-ef8d-481e-8356-b8025697789a
	  Boot ID:                    20c85c11-97db-4e9e-b2a2-d3ce088826f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dtp5b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 etcd-ha-240486-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-bgr7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-240486-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-240486-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-ktw9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-240486-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-vip-ha-240486-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m59s                kube-proxy       
	  Normal  CIDRAssignmentFailed     4m4s                 cidrAllocator    Node ha-240486-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-240486-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal  RegisteredNode           3m55s                node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	
	
	Name:               ha-240486-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:20:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-240486-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dbc2f47ba234abeb085dbeb264b66eb
	  System UUID:                2dbc2f47-ba23-4abe-b085-dbeb264b66eb
	  Boot ID:                    50d6dfb8-8ac7-4317-a369-7f2a4a221b1a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gngl7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-jlk49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  CIDRAssignmentFailed     3m1s                 cidrAllocator    Node ha-240486-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-240486-m04 status is now: NodeReady
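
Reading the four node descriptions together: ha-240486, ha-240486-m03 and ha-240486-m04 all report Ready, while ha-240486-m02 shows every condition as Unknown ("Kubelet stopped posting node status"), a NodeNotReady event, and the node.kubernetes.io/unreachable NoSchedule/NoExecute taints, consistent with that control-plane node having been stopped or lost. A quick way to get the same overview (illustrative; assumes the ha-240486 kubectl context created by minikube):

    $ kubectl --context ha-240486 get nodes -o wide
    $ kubectl --context ha-240486 describe node ha-240486-m02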
	
	
	==> dmesg <==
	[Aug28 17:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051317] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039079] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.719096] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.825427] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Aug28 17:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.828086] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054618] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049350] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.168729] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141709] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.274713] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.746562] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.326521] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.055344] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.078869] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.096594] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.273661] kauditd_printk_skb: 28 callbacks suppressed
	[ +15.597500] kauditd_printk_skb: 31 callbacks suppressed
	[Aug28 17:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594] <==
	{"level":"warn","ts":"2024-08-28T17:20:34.651258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.693863Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.715328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.752396Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"17a9362a46a02515","rtt":"8.624103ms","error":"dial tcp 192.168.39.103:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-28T17:20:34.752455Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"17a9362a46a02515","rtt":"916.984µs","error":"dial tcp 192.168.39.103:2380: i/o timeout"}
	{"level":"warn","ts":"2024-08-28T17:20:34.824963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.832303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.836466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.856856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.881113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.889650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.913277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.917061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.917480Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.928259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.936159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.952314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.972223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:34.985571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.002430Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.006005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.010036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.015608Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.016719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:20:35.023047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:20:35 up 6 min,  0 users,  load average: 0.26, 0.15, 0.08
	Linux ha-240486 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79] <==
	I0828 17:20:04.042849       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:20:14.040464       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:20:14.040625       1 main.go:299] handling current node
	I0828 17:20:14.040662       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:20:14.040681       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:20:14.040858       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:20:14.040880       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:20:14.041058       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:20:14.041099       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:20:24.041232       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:20:24.041410       1 main.go:299] handling current node
	I0828 17:20:24.041525       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:20:24.041616       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:20:24.041801       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:20:24.041827       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:20:24.042053       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:20:24.042088       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:20:34.033122       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:20:34.033293       1 main.go:299] handling current node
	I0828 17:20:34.033332       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:20:34.033401       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:20:34.033619       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:20:34.033672       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:20:34.033778       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:20:34.033815       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883] <==
	I0828 17:14:22.569383       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0828 17:14:22.579988       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227]
	I0828 17:14:22.581150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:14:22.587465       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:14:23.022591       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:14:24.419442       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:14:24.434631       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0828 17:14:24.461480       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:14:28.425072       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0828 17:14:28.522380       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0828 17:17:04.945281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33858: use of closed network connection
	E0828 17:17:05.130326       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	E0828 17:17:05.322972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33914: use of closed network connection
	E0828 17:17:05.514871       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33946: use of closed network connection
	E0828 17:17:05.689135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33964: use of closed network connection
	E0828 17:17:05.881850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39650: use of closed network connection
	E0828 17:17:06.051533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39672: use of closed network connection
	E0828 17:17:06.227751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39704: use of closed network connection
	E0828 17:17:06.414739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E0828 17:17:06.701391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39750: use of closed network connection
	E0828 17:17:06.869402       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E0828 17:17:07.047700       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39778: use of closed network connection
	E0828 17:17:07.212717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39798: use of closed network connection
	E0828 17:17:07.387973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39816: use of closed network connection
	E0828 17:17:07.563040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39834: use of closed network connection
	
	
	==> kube-controller-manager [594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe] <==
	I0828 17:17:34.685653       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-240486-m04" podCIDRs=["10.244.4.0/24"]
	I0828 17:17:34.685741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.685774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.698367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.960838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:35.376370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:37.302699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:37.952212       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-240486-m04"
	I0828 17:17:37.952796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:38.060437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:39.395052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:39.511778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:44.787421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:54.478777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:17:54.481090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:54.496818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:57.246174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:18:05.084745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:18:47.977778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:47.978358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:18:48.009732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:48.047911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.548886ms"
	I0828 17:18:48.048427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.961µs"
	I0828 17:18:49.445214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:53.156392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	
	
	==> kube-proxy [5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:14:29.691987       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:14:29.704718       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0828 17:14:29.704803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:14:29.770515       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:14:29.770605       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:14:29.770636       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:14:29.772841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:14:29.773155       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:14:29.773186       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:14:29.774614       1 config.go:197] "Starting service config controller"
	I0828 17:14:29.774660       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:14:29.774714       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:14:29.774731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:14:29.775271       1 config.go:326] "Starting node config controller"
	I0828 17:14:29.775300       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:14:29.875056       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:14:29.875142       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:14:29.875473       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096] <==
	W0828 17:14:21.915511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:21.915596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.067534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:22.068774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.116277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 17:14:22.116446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.120371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 17:14:22.120551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.154213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:22.154396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.444089       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:14:22.444262       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:14:25.491027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:16:59.369035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dtp5b\": pod busybox-7dff88458-dtp5b is already assigned to node \"ha-240486-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-dtp5b" node="ha-240486-m02"
	E0828 17:16:59.374021       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dtp5b\": pod busybox-7dff88458-dtp5b is already assigned to node \"ha-240486-m03\"" pod="default/busybox-7dff88458-dtp5b"
	I0828 17:16:59.390834       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="b03278c7-1983-4812-bb23-509106ace2c2" pod="default/busybox-7dff88458-5pjcm" assumedNode="ha-240486-m02" currentNode="ha-240486-m03"
	I0828 17:16:59.407998       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e4608982-afdd-491b-8fdb-ede6a6a4167a" pod="default/busybox-7dff88458-tnmmz" assumedNode="ha-240486" currentNode="ha-240486-m02"
	E0828 17:16:59.417678       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m03"
	E0828 17:16:59.424827       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b03278c7-1983-4812-bb23-509106ace2c2(default/busybox-7dff88458-5pjcm) was assumed on ha-240486-m03 but assigned to ha-240486-m02" pod="default/busybox-7dff88458-5pjcm"
	E0828 17:16:59.428003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" pod="default/busybox-7dff88458-5pjcm"
	I0828 17:16:59.428093       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m02"
	E0828 17:16:59.424465       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-tnmmz" node="ha-240486-m02"
	E0828 17:16:59.428536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e4608982-afdd-491b-8fdb-ede6a6a4167a(default/busybox-7dff88458-tnmmz) was assumed on ha-240486-m02 but assigned to ha-240486" pod="default/busybox-7dff88458-tnmmz"
	E0828 17:16:59.428571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" pod="default/busybox-7dff88458-tnmmz"
	I0828 17:16:59.428617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-tnmmz" node="ha-240486"
	
	
	==> kubelet <==
	Aug 28 17:19:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:19:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:19:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:19:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:19:24 ha-240486 kubelet[1308]: E0828 17:19:24.472866    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865564472569512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:24 ha-240486 kubelet[1308]: E0828 17:19:24.472905    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865564472569512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:34 ha-240486 kubelet[1308]: E0828 17:19:34.473904    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865574473677861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:34 ha-240486 kubelet[1308]: E0828 17:19:34.473986    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865574473677861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:44 ha-240486 kubelet[1308]: E0828 17:19:44.475989    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865584475611653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:44 ha-240486 kubelet[1308]: E0828 17:19:44.476032    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865584475611653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:54 ha-240486 kubelet[1308]: E0828 17:19:54.477235    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865594476729982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:19:54 ha-240486 kubelet[1308]: E0828 17:19:54.477274    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865594476729982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:04 ha-240486 kubelet[1308]: E0828 17:20:04.479451    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865604479108338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:04 ha-240486 kubelet[1308]: E0828 17:20:04.479495    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865604479108338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:14 ha-240486 kubelet[1308]: E0828 17:20:14.481545    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865614481070591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:14 ha-240486 kubelet[1308]: E0828 17:20:14.481585    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865614481070591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.386310    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:20:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.483299    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865624483053307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.483337    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865624483053307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:34 ha-240486 kubelet[1308]: E0828 17:20:34.484872    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634484626045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:34 ha-240486 kubelet[1308]: E0828 17:20:34.484970    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634484626045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-240486 -n ha-240486
helpers_test.go:261: (dbg) Run:  kubectl --context ha-240486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (3.196339056s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:20:39.527479   34403 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:20:39.527605   34403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:39.527613   34403 out.go:358] Setting ErrFile to fd 2...
	I0828 17:20:39.527618   34403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:39.527800   34403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:20:39.527959   34403 out.go:352] Setting JSON to false
	I0828 17:20:39.527985   34403 mustload.go:65] Loading cluster: ha-240486
	I0828 17:20:39.528100   34403 notify.go:220] Checking for updates...
	I0828 17:20:39.528441   34403 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:20:39.528458   34403 status.go:255] checking status of ha-240486 ...
	I0828 17:20:39.528963   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.529025   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.547182   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40885
	I0828 17:20:39.547658   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.548245   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.548265   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.548623   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.548819   34403 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:20:39.550605   34403 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:20:39.550623   34403 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:39.550929   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.550973   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.566518   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0828 17:20:39.566920   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.567311   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.567329   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.567702   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.567901   34403 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:20:39.570441   34403 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:39.571033   34403 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:39.571066   34403 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:39.571224   34403 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:39.571685   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.571729   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.586149   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42081
	I0828 17:20:39.586501   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.586886   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.586903   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.587213   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.587406   34403 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:20:39.587626   34403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:39.587656   34403 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:20:39.590285   34403 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:39.590705   34403 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:39.590738   34403 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:39.590821   34403 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:20:39.590975   34403 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:20:39.591131   34403 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:20:39.591295   34403 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:20:39.679568   34403 ssh_runner.go:195] Run: systemctl --version
	I0828 17:20:39.686699   34403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:39.703888   34403 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:39.703917   34403 api_server.go:166] Checking apiserver status ...
	I0828 17:20:39.703949   34403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:39.719171   34403 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:20:39.734376   34403 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:39.734435   34403 ssh_runner.go:195] Run: ls
	I0828 17:20:39.739120   34403 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:39.743952   34403 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:39.743971   34403 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:20:39.743979   34403 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:39.743993   34403 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:20:39.744266   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.744301   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.759442   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0828 17:20:39.759861   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.760286   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.760309   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.760636   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.760813   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:20:39.762463   34403 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:20:39.762479   34403 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:39.762770   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.762814   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.777479   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33053
	I0828 17:20:39.777902   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.778409   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.778432   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.778773   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.778951   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:20:39.782128   34403 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:39.782594   34403 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:39.782623   34403 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:39.782758   34403 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:39.783064   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:39.783113   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:39.798065   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33465
	I0828 17:20:39.798576   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:39.799117   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:39.799137   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:39.799442   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:39.799609   34403 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:20:39.799808   34403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:39.799831   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:20:39.802642   34403 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:39.803080   34403 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:39.803103   34403 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:39.803223   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:20:39.803369   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:20:39.803483   34403 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:20:39.803662   34403 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:20:42.346313   34403 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:42.346393   34403 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:20:42.346407   34403 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:42.346417   34403 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:20:42.346433   34403 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:42.346443   34403 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:20:42.346829   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.346871   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.361607   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0828 17:20:42.361982   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.362452   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.362486   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.362834   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.363023   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:20:42.364497   34403 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:20:42.364515   34403 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:42.364921   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.364956   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.379847   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0828 17:20:42.380198   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.380737   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.380756   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.381062   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.381238   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:20:42.383891   34403 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:42.384275   34403 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:42.384296   34403 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:42.384429   34403 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:42.384725   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.384758   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.398977   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40977
	I0828 17:20:42.399362   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.399819   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.399837   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.400113   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.400310   34403 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:20:42.400483   34403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:42.400502   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:20:42.403284   34403 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:42.403692   34403 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:42.403731   34403 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:42.403859   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:20:42.404029   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:20:42.404163   34403 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:20:42.404329   34403 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:20:42.481486   34403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:42.499364   34403 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:42.499398   34403 api_server.go:166] Checking apiserver status ...
	I0828 17:20:42.499442   34403 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:42.512712   34403 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:20:42.521988   34403 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:42.522149   34403 ssh_runner.go:195] Run: ls
	I0828 17:20:42.526400   34403 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:42.532693   34403 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:42.532716   34403 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:20:42.532725   34403 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:42.532740   34403 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:20:42.533104   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.533152   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.548602   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46183
	I0828 17:20:42.549015   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.549523   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.549542   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.549878   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.550065   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:20:42.551769   34403 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:20:42.551785   34403 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:42.552065   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.552096   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.567026   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46355
	I0828 17:20:42.567479   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.567917   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.567940   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.568223   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.568415   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:20:42.570974   34403 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:42.571359   34403 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:42.571390   34403 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:42.571521   34403 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:42.571863   34403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:42.571902   34403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:42.586697   34403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46561
	I0828 17:20:42.587132   34403 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:42.587622   34403 main.go:141] libmachine: Using API Version  1
	I0828 17:20:42.587642   34403 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:42.587975   34403 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:42.588179   34403 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:20:42.588343   34403 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:42.588370   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:20:42.591025   34403 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:42.591402   34403 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:42.591423   34403 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:42.591514   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:20:42.591683   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:20:42.591814   34403 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:20:42.591968   34403 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:20:42.668818   34403 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:42.682337   34403 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
E0828 17:20:44.100972   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (5.241884314s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:20:43.624284   34503 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:20:43.624510   34503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:43.624519   34503 out.go:358] Setting ErrFile to fd 2...
	I0828 17:20:43.624523   34503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:43.624679   34503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:20:43.624826   34503 out.go:352] Setting JSON to false
	I0828 17:20:43.624850   34503 mustload.go:65] Loading cluster: ha-240486
	I0828 17:20:43.624884   34503 notify.go:220] Checking for updates...
	I0828 17:20:43.625228   34503 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:20:43.625249   34503 status.go:255] checking status of ha-240486 ...
	I0828 17:20:43.625792   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.625845   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.644791   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46111
	I0828 17:20:43.645295   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.645872   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.645914   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.646259   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.646435   34503 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:20:43.647963   34503 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:20:43.647976   34503 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:43.648275   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.648309   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.663144   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I0828 17:20:43.663521   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.663999   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.664017   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.664333   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.664491   34503 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:20:43.667014   34503 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:43.667464   34503 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:43.667500   34503 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:43.667566   34503 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:43.667943   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.667999   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.682495   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41075
	I0828 17:20:43.682893   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.683318   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.683340   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.683685   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.683858   34503 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:20:43.684032   34503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:43.684065   34503 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:20:43.686831   34503 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:43.687186   34503 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:43.687213   34503 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:43.687317   34503 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:20:43.687486   34503 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:20:43.687645   34503 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:20:43.687754   34503 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:20:43.768798   34503 ssh_runner.go:195] Run: systemctl --version
	I0828 17:20:43.774209   34503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:43.788512   34503 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:43.788543   34503 api_server.go:166] Checking apiserver status ...
	I0828 17:20:43.788583   34503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:43.802700   34503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:20:43.811477   34503 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:43.811539   34503 ssh_runner.go:195] Run: ls
	I0828 17:20:43.815639   34503 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:43.821724   34503 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:43.821746   34503 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:20:43.821757   34503 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:43.821787   34503 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:20:43.822195   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.822233   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.836646   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35239
	I0828 17:20:43.837077   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.837546   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.837565   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.837846   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.838012   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:20:43.839578   34503 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:20:43.839594   34503 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:43.839869   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.839901   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.854461   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35323
	I0828 17:20:43.854823   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.855254   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.855276   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.855577   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.855782   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:20:43.858828   34503 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:43.859276   34503 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:43.859394   34503 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:43.859463   34503 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:43.859784   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:43.859827   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:43.875369   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0828 17:20:43.875743   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:43.876216   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:43.876236   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:43.876523   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:43.876715   34503 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:20:43.876869   34503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:43.876883   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:20:43.879423   34503 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:43.879765   34503 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:43.879783   34503 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:43.879934   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:20:43.880123   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:20:43.880257   34503 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:20:43.880419   34503 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:20:45.418378   34503 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:45.418431   34503 retry.go:31] will retry after 272.915589ms: dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:48.490348   34503 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:48.490467   34503 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:20:48.490490   34503 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:48.490501   34503 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:20:48.490524   34503 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:48.490541   34503 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:20:48.490848   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.490900   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.505938   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45293
	I0828 17:20:48.506372   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.506789   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.506807   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.507138   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.507306   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:20:48.508709   34503 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:20:48.508726   34503 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:48.509076   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.509114   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.523240   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43723
	I0828 17:20:48.523653   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.524100   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.524123   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.524405   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.524563   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:20:48.527219   34503 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:48.527817   34503 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:48.527843   34503 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:48.527978   34503 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:48.528305   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.528343   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.542838   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35155
	I0828 17:20:48.543294   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.543771   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.543805   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.544124   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.544441   34503 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:20:48.544661   34503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:48.544685   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:20:48.547396   34503 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:48.547818   34503 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:48.547837   34503 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:48.547993   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:20:48.548163   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:20:48.548319   34503 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:20:48.548464   34503 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:20:48.625202   34503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:48.641784   34503 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:48.641810   34503 api_server.go:166] Checking apiserver status ...
	I0828 17:20:48.641839   34503 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:48.655499   34503 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:20:48.664697   34503 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:48.664752   34503 ssh_runner.go:195] Run: ls
	I0828 17:20:48.669566   34503 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:48.673879   34503 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:48.673902   34503 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:20:48.673910   34503 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:48.673923   34503 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:20:48.674235   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.674269   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.689770   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0828 17:20:48.690195   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.690704   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.690724   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.691006   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.691150   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:20:48.692579   34503 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:20:48.692595   34503 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:48.692868   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.692905   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.707464   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40249
	I0828 17:20:48.707944   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.708472   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.708496   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.708823   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.708995   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:20:48.711663   34503 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:48.712041   34503 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:48.712067   34503 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:48.712197   34503 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:48.712586   34503 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:48.712635   34503 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:48.727634   34503 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I0828 17:20:48.728001   34503 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:48.728440   34503 main.go:141] libmachine: Using API Version  1
	I0828 17:20:48.728461   34503 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:48.728757   34503 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:48.728929   34503 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:20:48.729101   34503 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:48.729129   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:20:48.731622   34503 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:48.731985   34503 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:48.732019   34503 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:48.732152   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:20:48.732298   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:20:48.732451   34503 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:20:48.732579   34503 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:20:48.809768   34503 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:48.824033   34503 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
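The pattern in the trace above repeats on every retry of this check: ha-240486 and ha-240486-m03 pass the SSH, kubelet, and apiserver probes, while every dial to ha-240486-m02 at 192.168.39.103:22 fails with "no route to host", so that node is reported as Host:Error and the status command exits with code 3. Below is a minimal sketch of that reachability probe; it assumes only the Go standard library, and the function name, attempt count, retry pause, and timeout are illustrative values rather than minikube's actual ones.

package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH dials host:22 a few times, mirroring the
// "dial tcp 192.168.39.103:22: connect: no route to host" retries
// logged above before status falls back to Host:Error.
func probeSSH(host string, attempts int, timeout time.Duration) error {
	addr := net.JoinHostPort(host, "22")
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err == nil {
			conn.Close()
			return nil // reachable: the real check would go on to run "df -h /var" over SSH
		}
		lastErr = err
		time.Sleep(300 * time.Millisecond) // illustrative pause; minikube retries with a jittered backoff
	}
	return fmt.Errorf("unreachable after %d attempts: %w", attempts, lastErr)
}

func main() {
	// 192.168.39.103 is the ha-240486-m02 address taken from the log above.
	if err := probeSSH("192.168.39.103", 3, 5*time.Second); err != nil {
		fmt.Println("status would report Host:Error:", err)
	}
}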
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (4.920173912s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:20:50.090498   34603 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:20:50.090605   34603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:50.090614   34603 out.go:358] Setting ErrFile to fd 2...
	I0828 17:20:50.090617   34603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:50.090774   34603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:20:50.090931   34603 out.go:352] Setting JSON to false
	I0828 17:20:50.090954   34603 mustload.go:65] Loading cluster: ha-240486
	I0828 17:20:50.091008   34603 notify.go:220] Checking for updates...
	I0828 17:20:50.091287   34603 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:20:50.091299   34603 status.go:255] checking status of ha-240486 ...
	I0828 17:20:50.091679   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.091761   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.107231   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I0828 17:20:50.107659   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.108302   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.108320   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.108783   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.108982   34603 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:20:50.110529   34603 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:20:50.110545   34603 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:50.110845   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.110885   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.125385   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0828 17:20:50.125803   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.126297   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.126315   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.126631   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.126797   34603 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:20:50.129421   34603 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:50.129772   34603 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:50.129794   34603 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:50.129925   34603 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:50.130266   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.130331   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.145680   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0828 17:20:50.146069   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.146552   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.146578   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.146838   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.147013   34603 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:20:50.147173   34603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:50.147209   34603 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:20:50.149830   34603 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:50.150226   34603 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:50.150252   34603 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:50.150402   34603 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:20:50.150571   34603 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:20:50.150707   34603 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:20:50.150824   34603 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:20:50.230867   34603 ssh_runner.go:195] Run: systemctl --version
	I0828 17:20:50.240580   34603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:50.256118   34603 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:50.256151   34603 api_server.go:166] Checking apiserver status ...
	I0828 17:20:50.256182   34603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:50.270556   34603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:20:50.280374   34603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:50.280440   34603 ssh_runner.go:195] Run: ls
	I0828 17:20:50.284779   34603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:50.290421   34603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:50.290446   34603 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:20:50.290458   34603 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:50.290476   34603 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:20:50.290818   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.290852   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.305777   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0828 17:20:50.306190   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.306705   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.306732   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.307068   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.307254   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:20:50.308788   34603 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:20:50.308806   34603 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:50.309212   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.309278   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.325767   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0828 17:20:50.326253   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.326754   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.326771   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.327041   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.327179   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:20:50.329924   34603 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:50.330394   34603 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:50.330416   34603 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:50.330544   34603 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:50.330829   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:50.330865   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:50.346096   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39181
	I0828 17:20:50.346491   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:50.347015   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:50.347039   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:50.347369   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:50.347569   34603 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:20:50.347752   34603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:50.347774   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:20:50.350400   34603 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:50.350780   34603 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:50.350811   34603 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:50.350916   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:20:50.351079   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:20:50.351243   34603 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:20:50.351383   34603 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:20:51.562406   34603 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:51.562450   34603 retry.go:31] will retry after 242.153921ms: dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:54.634403   34603 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:20:54.634502   34603 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:20:54.634539   34603 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:54.634552   34603 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:20:54.634587   34603 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:54.634601   34603 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:20:54.635048   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.635101   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.650395   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35701
	I0828 17:20:54.650825   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.651251   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.651272   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.651555   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.651736   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:20:54.653307   34603 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:20:54.653320   34603 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:54.653608   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.653638   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.668472   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36933
	I0828 17:20:54.668910   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.669390   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.669411   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.669716   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.669901   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:20:54.672695   34603 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:54.673077   34603 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:54.673109   34603 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:54.673311   34603 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:20:54.673720   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.673761   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.688659   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44699
	I0828 17:20:54.689082   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.689526   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.689556   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.689869   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.690113   34603 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:20:54.690317   34603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:54.690337   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:20:54.692724   34603 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:54.693080   34603 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:20:54.693110   34603 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:20:54.693240   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:20:54.693410   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:20:54.693568   34603 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:20:54.693707   34603 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:20:54.773762   34603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:54.787371   34603 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:54.787397   34603 api_server.go:166] Checking apiserver status ...
	I0828 17:20:54.787428   34603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:54.801078   34603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:20:54.810088   34603 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:54.810151   34603 ssh_runner.go:195] Run: ls
	I0828 17:20:54.813947   34603 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:54.818341   34603 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:54.818375   34603 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:20:54.818386   34603 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:54.818403   34603 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:20:54.818791   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.818838   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.834571   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0828 17:20:54.834990   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.835517   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.835531   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.835826   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.835989   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:20:54.837443   34603 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:20:54.837459   34603 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:54.837741   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.837776   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.854009   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45339
	I0828 17:20:54.854402   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.854853   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.854873   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.855157   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.855361   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:20:54.857774   34603 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:54.858203   34603 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:54.858252   34603 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:54.858392   34603 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:20:54.858664   34603 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:54.858695   34603 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:54.873124   34603 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42593
	I0828 17:20:54.873457   34603 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:54.873940   34603 main.go:141] libmachine: Using API Version  1
	I0828 17:20:54.873959   34603 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:54.874257   34603 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:54.874433   34603 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:20:54.874605   34603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:54.874626   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:20:54.876968   34603 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:54.877338   34603 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:20:54.877372   34603 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:20:54.877485   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:20:54.877650   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:20:54.877784   34603 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:20:54.877896   34603 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:20:54.953190   34603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:54.966888   34603 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
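For the nodes that are reachable, the same trace shows how the apiserver column is derived: status issues an HTTPS GET against https://192.168.39.254:8443/healthz and treats a 200 response with body "ok" as Running. A rough sketch of that probe follows; the function name is chosen here for readability, and the client skips certificate verification purely for illustration (the real check authenticates against the cluster CA from the kubeconfig).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mimics the "Checking apiserver healthz at
// https://192.168.39.254:8443/healthz ..." step in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify is only for this sketch; the actual check
		// trusts the cluster CA instead of disabling verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // "ok" on a healthy control plane
	return nil
}

func main() {
	_ = checkHealthz("https://192.168.39.254:8443/healthz")
}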
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (4.811059368s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:20:56.351449   34720 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:20:56.351706   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:56.351717   34720 out.go:358] Setting ErrFile to fd 2...
	I0828 17:20:56.351730   34720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:20:56.351894   34720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:20:56.352067   34720 out.go:352] Setting JSON to false
	I0828 17:20:56.352098   34720 mustload.go:65] Loading cluster: ha-240486
	I0828 17:20:56.352223   34720 notify.go:220] Checking for updates...
	I0828 17:20:56.352542   34720 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:20:56.352558   34720 status.go:255] checking status of ha-240486 ...
	I0828 17:20:56.353000   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.353066   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.373399   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43669
	I0828 17:20:56.373812   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.374369   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.374392   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.374690   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.374879   34720 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:20:56.376390   34720 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:20:56.376403   34720 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:56.376727   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.376763   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.391691   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39439
	I0828 17:20:56.392074   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.392656   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.392707   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.393027   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.393240   34720 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:20:56.395987   34720 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:56.396381   34720 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:56.396449   34720 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:56.396540   34720 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:20:56.396858   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.396893   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.411540   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44641
	I0828 17:20:56.411869   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.412356   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.412393   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.412697   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.412900   34720 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:20:56.413079   34720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:56.413107   34720 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:20:56.415983   34720 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:56.416346   34720 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:20:56.416367   34720 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:20:56.416530   34720 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:20:56.416702   34720 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:20:56.416871   34720 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:20:56.417009   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:20:56.497655   34720 ssh_runner.go:195] Run: systemctl --version
	I0828 17:20:56.503307   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:20:56.516685   34720 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:20:56.516723   34720 api_server.go:166] Checking apiserver status ...
	I0828 17:20:56.516774   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:20:56.529442   34720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:20:56.538184   34720 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:20:56.538256   34720 ssh_runner.go:195] Run: ls
	I0828 17:20:56.542396   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:20:56.546927   34720 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:20:56.546950   34720 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:20:56.546966   34720 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:20:56.546987   34720 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:20:56.547279   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.547311   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.563107   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37209
	I0828 17:20:56.563552   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.563978   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.563996   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.564288   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.564473   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:20:56.565932   34720 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:20:56.565946   34720 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:56.566263   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.566304   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.581281   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41761
	I0828 17:20:56.581624   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.582128   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.582151   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.582462   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.582627   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:20:56.585344   34720 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:56.585756   34720 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:56.585780   34720 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:56.585883   34720 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:20:56.586228   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:20:56.586272   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:20:56.600794   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41569
	I0828 17:20:56.601182   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:20:56.601653   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:20:56.601672   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:20:56.601940   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:20:56.602096   34720 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:20:56.602280   34720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:20:56.602299   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:20:56.604882   34720 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:56.605250   34720 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:20:56.605275   34720 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:20:56.605411   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:20:56.605550   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:20:56.605694   34720 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:20:56.605839   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:20:57.706385   34720 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:20:57.706436   34720 retry.go:31] will retry after 169.848805ms: dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:21:00.778357   34720 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:21:00.778429   34720 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:21:00.778442   34720 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:00.778452   34720 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:21:00.778471   34720 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:00.778478   34720 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:21:00.778778   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:00.778817   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:00.793841   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42947
	I0828 17:21:00.794271   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:00.794745   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:00.794768   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:00.795085   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:00.795256   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:21:00.797301   34720 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:21:00.797316   34720 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:00.797620   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:00.797660   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:00.813022   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0828 17:21:00.813525   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:00.813953   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:00.813971   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:00.814260   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:00.814420   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:21:00.817384   34720 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:00.817834   34720 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:00.817852   34720 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:00.818015   34720 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:00.818321   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:00.818357   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:00.832335   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0828 17:21:00.832725   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:00.833159   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:00.833178   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:00.833440   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:00.833615   34720 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:21:00.833792   34720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:00.833809   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:21:00.836301   34720 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:00.836700   34720 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:00.836721   34720 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:00.836853   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:21:00.837031   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:21:00.837174   34720 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:21:00.837345   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:21:00.917951   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:00.932854   34720 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:00.932893   34720 api_server.go:166] Checking apiserver status ...
	I0828 17:21:00.932934   34720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:00.947177   34720 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:21:00.956710   34720 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:00.956761   34720 ssh_runner.go:195] Run: ls
	I0828 17:21:00.961082   34720 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:00.965305   34720 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:00.965322   34720 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:21:00.965330   34720 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:00.965342   34720 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:21:00.965615   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:00.965655   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:00.980716   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0828 17:21:00.981127   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:00.981627   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:00.981642   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:00.982020   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:00.982311   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:00.983753   34720 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:21:00.983770   34720 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:00.984044   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:00.984076   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:00.998930   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40243
	I0828 17:21:00.999339   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:00.999799   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:00.999827   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:01.000150   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:01.000372   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:21:01.003174   34720 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:01.003563   34720 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:01.003582   34720 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:01.003745   34720 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:01.004033   34720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:01.004069   34720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:01.018845   34720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0828 17:21:01.019246   34720 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:01.019743   34720 main.go:141] libmachine: Using API Version  1
	I0828 17:21:01.019762   34720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:01.020091   34720 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:01.020264   34720 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:21:01.020456   34720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:01.020474   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:21:01.023738   34720 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:01.024200   34720 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:01.024222   34720 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:01.025104   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:21:01.025304   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:21:01.025486   34720 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:21:01.025642   34720 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:21:01.104676   34720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:01.117366   34720 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
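The failure above comes down to the ssh client in sshutil.go being unable to reach ha-240486-m02 on port 22 ("connect: no route to host") after the node was stopped. The following is not part of the test output: it is a minimal standalone sketch, using only the node address and retry behaviour visible in the log above, of the kind of TCP dial the status command retries before giving up. Run against 192.168.39.103:22 while the node is down it would print the same "no route to host" error outside the test harness.

	// probe_ssh.go - sketch of the TCP reachability check performed before
	// opening an SSH session; address taken from the log above, not minikube code.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "192.168.39.103:22" // ha-240486-m02 as reported in the log
		for attempt := 1; attempt <= 3; attempt++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				// On the failing run this prints: dial tcp ...: connect: no route to host
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(200 * time.Millisecond)
				continue
			}
			conn.Close()
			fmt.Printf("attempt %d: %s is reachable\n", attempt, addr)
			return
		}
	}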
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (4.375381054s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:21:03.240478   34821 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:21:03.240712   34821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:03.240721   34821 out.go:358] Setting ErrFile to fd 2...
	I0828 17:21:03.240725   34821 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:03.240899   34821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:21:03.241046   34821 out.go:352] Setting JSON to false
	I0828 17:21:03.241070   34821 mustload.go:65] Loading cluster: ha-240486
	I0828 17:21:03.241116   34821 notify.go:220] Checking for updates...
	I0828 17:21:03.241591   34821 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:21:03.241613   34821 status.go:255] checking status of ha-240486 ...
	I0828 17:21:03.242010   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.242101   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.257014   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0828 17:21:03.257355   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.257877   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.257904   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.258273   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.258471   34821 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:21:03.260020   34821 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:21:03.260038   34821 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:03.260368   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.260418   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.275361   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38933
	I0828 17:21:03.275763   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.276211   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.276230   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.276693   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.276878   34821 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:21:03.279361   34821 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:03.279732   34821 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:03.279751   34821 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:03.279900   34821 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:03.280168   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.280199   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.294464   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45213
	I0828 17:21:03.294814   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.295335   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.295355   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.295718   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.295907   34821 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:21:03.296104   34821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:03.296124   34821 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:21:03.298985   34821 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:03.299370   34821 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:03.299395   34821 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:03.299598   34821 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:21:03.299770   34821 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:21:03.299938   34821 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:21:03.300109   34821 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:21:03.382296   34821 ssh_runner.go:195] Run: systemctl --version
	I0828 17:21:03.388393   34821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:03.402161   34821 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:03.402194   34821 api_server.go:166] Checking apiserver status ...
	I0828 17:21:03.402249   34821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:03.415019   34821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:21:03.423309   34821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:03.423353   34821 ssh_runner.go:195] Run: ls
	I0828 17:21:03.427225   34821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:03.433161   34821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:03.433183   34821 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:21:03.433194   34821 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:03.433214   34821 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:21:03.433508   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.433551   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.448255   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37019
	I0828 17:21:03.448659   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.449159   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.449178   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.449476   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.449671   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:21:03.451238   34821 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:21:03.451254   34821 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:21:03.451528   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.451568   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.465902   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
	I0828 17:21:03.466275   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.466753   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.466773   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.467024   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.467213   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:21:03.469966   34821 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:03.470402   34821 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:21:03.470439   34821 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:03.470542   34821 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:21:03.470833   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:03.470873   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:03.485112   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35057
	I0828 17:21:03.485526   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:03.485938   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:03.485957   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:03.486249   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:03.486392   34821 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:21:03.486560   34821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:03.486583   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:21:03.489075   34821 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:03.489488   34821 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:21:03.489514   34821 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:03.489650   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:21:03.489804   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:21:03.489942   34821 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:21:03.490062   34821 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:21:03.854281   34821 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:03.854332   34821 retry.go:31] will retry after 306.520135ms: dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:21:07.214338   34821 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:21:07.214431   34821 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:21:07.214454   34821 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:07.214465   34821 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:21:07.214482   34821 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:07.214490   34821 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:21:07.214833   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.214874   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.230613   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
	I0828 17:21:07.231120   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.231636   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.231667   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.231969   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.232170   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:21:07.233918   34821 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:21:07.233935   34821 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:07.234266   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.234300   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.252347   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34983
	I0828 17:21:07.252837   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.253286   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.253314   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.253628   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.253813   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:21:07.256843   34821 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:07.257326   34821 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:07.257362   34821 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:07.257484   34821 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:07.257829   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.257875   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.273211   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
	I0828 17:21:07.273744   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.274241   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.274270   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.274630   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.274844   34821 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:21:07.275042   34821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:07.275061   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:21:07.277894   34821 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:07.278503   34821 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:07.278535   34821 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:07.278659   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:21:07.278858   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:21:07.279049   34821 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:21:07.279291   34821 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:21:07.365684   34821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:07.380811   34821 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:07.380844   34821 api_server.go:166] Checking apiserver status ...
	I0828 17:21:07.380874   34821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:07.395814   34821 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:21:07.405763   34821 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:07.405847   34821 ssh_runner.go:195] Run: ls
	I0828 17:21:07.410384   34821 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:07.416321   34821 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:07.416347   34821 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:21:07.416355   34821 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:07.416370   34821 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:21:07.416730   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.416774   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.431547   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46127
	I0828 17:21:07.432000   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.432508   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.432530   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.432813   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.433029   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:07.434817   34821 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:21:07.434836   34821 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:07.435198   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.435266   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.450574   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43077
	I0828 17:21:07.450965   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.451400   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.451419   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.451737   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.451920   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:21:07.454572   34821 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:07.454994   34821 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:07.455016   34821 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:07.455159   34821 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:07.455566   34821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:07.455607   34821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:07.470504   34821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0828 17:21:07.470856   34821 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:07.471313   34821 main.go:141] libmachine: Using API Version  1
	I0828 17:21:07.471341   34821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:07.471699   34821 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:07.471926   34821 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:21:07.472147   34821 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:07.472173   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:21:07.475217   34821 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:07.475663   34821 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:07.475694   34821 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:07.475906   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:21:07.476056   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:21:07.476213   34821 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:21:07.476357   34821 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:21:07.557591   34821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:07.571322   34821 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
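For the nodes that are reachable, the log shows the status command probing the control-plane endpoint directly ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"). The sketch below is an illustrative standalone probe of that same endpoint, not minikube's implementation; skipping TLS verification is an assumption made so the snippet runs without the cluster's kubeconfig credentials.

	// healthz_probe.go - sketch of an apiserver health probe against the
	// endpoint logged above; InsecureSkipVerify is an assumption for a
	// standalone check, the real client authenticates via kubeconfig.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", matching the log above.
		fmt.Printf("%d: %s\n", resp.StatusCode, body)
	}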
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (3.694020698s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:21:14.286624   34937 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:21:14.286727   34937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:14.286737   34937 out.go:358] Setting ErrFile to fd 2...
	I0828 17:21:14.286742   34937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:14.286914   34937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:21:14.287121   34937 out.go:352] Setting JSON to false
	I0828 17:21:14.287151   34937 mustload.go:65] Loading cluster: ha-240486
	I0828 17:21:14.287243   34937 notify.go:220] Checking for updates...
	I0828 17:21:14.287588   34937 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:21:14.287603   34937 status.go:255] checking status of ha-240486 ...
	I0828 17:21:14.288014   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.288075   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.305925   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45329
	I0828 17:21:14.306346   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.306992   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.307017   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.307354   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.307543   34937 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:21:14.309171   34937 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:21:14.309188   34937 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:14.309481   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.309512   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.326668   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44251
	I0828 17:21:14.327161   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.327710   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.327734   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.328086   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.328306   34937 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:21:14.331275   34937 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:14.331779   34937 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:14.331812   34937 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:14.331972   34937 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:14.332397   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.332457   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.349099   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0828 17:21:14.349598   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.350057   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.350104   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.350461   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.350649   34937 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:21:14.350833   34937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:14.350864   34937 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:21:14.353550   34937 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:14.354040   34937 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:14.354064   34937 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:14.354231   34937 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:21:14.354395   34937 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:21:14.354537   34937 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:21:14.354665   34937 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:21:14.438906   34937 ssh_runner.go:195] Run: systemctl --version
	I0828 17:21:14.444715   34937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:14.460002   34937 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:14.460039   34937 api_server.go:166] Checking apiserver status ...
	I0828 17:21:14.460079   34937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:14.473704   34937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:21:14.483091   34937 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:14.483154   34937 ssh_runner.go:195] Run: ls
	I0828 17:21:14.487204   34937 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:14.492922   34937 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:14.492947   34937 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:21:14.492960   34937 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:14.492979   34937 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:21:14.493273   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.493312   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.508622   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42581
	I0828 17:21:14.509030   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.509477   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.509492   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.509764   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.509940   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:21:14.511511   34937 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:21:14.511529   34937 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:21:14.511956   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.512001   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.526831   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0828 17:21:14.527284   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.527775   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.527794   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.528100   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.528286   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:21:14.531151   34937 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:14.531584   34937 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:21:14.531610   34937 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:14.531787   34937 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:21:14.532170   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:14.532214   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:14.548838   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I0828 17:21:14.549256   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:14.549795   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:14.549815   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:14.550250   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:14.550479   34937 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:21:14.550710   34937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:14.550733   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:21:14.553585   34937 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:14.554109   34937 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:21:14.554141   34937 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:21:14.554365   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:21:14.554566   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:21:14.554737   34937 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:21:14.554891   34937 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	W0828 17:21:17.610315   34937 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.103:22: connect: no route to host
	W0828 17:21:17.610413   34937 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	E0828 17:21:17.610436   34937 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:17.610447   34937 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:21:17.610469   34937 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.103:22: connect: no route to host
	I0828 17:21:17.610484   34937 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:21:17.610820   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.610869   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.625676   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40969
	I0828 17:21:17.626105   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.626604   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.626624   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.626949   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.627147   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:21:17.628826   34937 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:21:17.628846   34937 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:17.629150   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.629182   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.643641   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42321
	I0828 17:21:17.643959   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.644430   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.644450   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.644747   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.644933   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:21:17.647846   34937 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:17.648339   34937 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:17.648364   34937 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:17.648474   34937 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:17.648773   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.648826   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.663492   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0828 17:21:17.663825   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.664224   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.664243   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.664522   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.664692   34937 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:21:17.664845   34937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:17.664876   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:21:17.667678   34937 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:17.668076   34937 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:17.668106   34937 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:17.668297   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:21:17.668494   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:21:17.668654   34937 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:21:17.668767   34937 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:21:17.749229   34937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:17.763789   34937 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:17.763813   34937 api_server.go:166] Checking apiserver status ...
	I0828 17:21:17.763850   34937 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:17.776014   34937 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:21:17.784644   34937 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:17.784717   34937 ssh_runner.go:195] Run: ls
	I0828 17:21:17.789234   34937 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:17.793701   34937 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:17.793720   34937 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:21:17.793727   34937 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:17.793743   34937 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:21:17.794035   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.794088   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.808767   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0828 17:21:17.809236   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.809847   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.809869   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.810231   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.810410   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:17.811794   34937 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:21:17.811812   34937 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:17.812082   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.812131   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.826270   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I0828 17:21:17.826612   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.827035   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.827056   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.827370   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.827546   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:21:17.830539   34937 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:17.830973   34937 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:17.831008   34937 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:17.831146   34937 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:17.831473   34937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:17.831518   34937 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:17.845535   34937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0828 17:21:17.845908   34937 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:17.846435   34937 main.go:141] libmachine: Using API Version  1
	I0828 17:21:17.846456   34937 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:17.846819   34937 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:17.847020   34937 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:21:17.847211   34937 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:17.847231   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:21:17.849756   34937 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:17.850188   34937 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:17.850214   34937 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:17.850385   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:21:17.850564   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:21:17.850718   34937 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:21:17.850845   34937 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:21:17.924918   34937 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:17.938037   34937 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 7 (595.339494ms)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-240486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:21:25.648686   35073 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:21:25.648962   35073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:25.648972   35073 out.go:358] Setting ErrFile to fd 2...
	I0828 17:21:25.648978   35073 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:25.649182   35073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:21:25.649356   35073 out.go:352] Setting JSON to false
	I0828 17:21:25.649391   35073 mustload.go:65] Loading cluster: ha-240486
	I0828 17:21:25.649496   35073 notify.go:220] Checking for updates...
	I0828 17:21:25.649798   35073 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:21:25.649814   35073 status.go:255] checking status of ha-240486 ...
	I0828 17:21:25.650260   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.650338   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.668213   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34837
	I0828 17:21:25.668643   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.669284   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.669312   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.669673   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.669886   35073 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:21:25.671481   35073 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:21:25.671500   35073 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:25.671781   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.671818   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.687228   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35343
	I0828 17:21:25.687672   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.688172   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.688197   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.688485   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.688663   35073 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:21:25.691380   35073 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:25.691744   35073 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:25.691773   35073 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:25.691871   35073 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:21:25.692149   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.692182   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.706516   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44807
	I0828 17:21:25.706898   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.707365   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.707381   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.707597   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.707791   35073 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:21:25.707951   35073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:25.707976   35073 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:21:25.710511   35073 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:25.710801   35073 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:21:25.710838   35073 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:21:25.710949   35073 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:21:25.711099   35073 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:21:25.711227   35073 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:21:25.711417   35073 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:21:25.797793   35073 ssh_runner.go:195] Run: systemctl --version
	I0828 17:21:25.803614   35073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:25.817233   35073 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:25.817271   35073 api_server.go:166] Checking apiserver status ...
	I0828 17:21:25.817308   35073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:25.831303   35073 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup
	W0828 17:21:25.840462   35073 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1126/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:25.840521   35073 ssh_runner.go:195] Run: ls
	I0828 17:21:25.845203   35073 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:25.849684   35073 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:25.849705   35073 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:21:25.849714   35073 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:25.849729   35073 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:21:25.850006   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.850041   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.865117   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33563
	I0828 17:21:25.865557   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.866151   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.866177   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.866473   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.866649   35073 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:21:25.868112   35073 status.go:330] ha-240486-m02 host status = "Stopped" (err=<nil>)
	I0828 17:21:25.868128   35073 status.go:343] host is not running, skipping remaining checks
	I0828 17:21:25.868134   35073 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:25.868151   35073 status.go:255] checking status of ha-240486-m03 ...
	I0828 17:21:25.868436   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.868484   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.883684   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I0828 17:21:25.884077   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.884637   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.884661   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.884983   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.885150   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:21:25.886651   35073 status.go:330] ha-240486-m03 host status = "Running" (err=<nil>)
	I0828 17:21:25.886680   35073 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:25.887053   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.887096   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.901913   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35987
	I0828 17:21:25.902422   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.902837   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.902854   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.903140   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.903289   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:21:25.905806   35073 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:25.906223   35073 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:25.906264   35073 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:25.906402   35073 host.go:66] Checking if "ha-240486-m03" exists ...
	I0828 17:21:25.906703   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:25.906738   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:25.921699   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45757
	I0828 17:21:25.922234   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:25.922708   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:25.922732   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:25.923035   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:25.923189   35073 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:21:25.923367   35073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:25.923385   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:21:25.926026   35073 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:25.926474   35073 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:25.926499   35073 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:25.926670   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:21:25.926836   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:21:25.926987   35073 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:21:25.927148   35073 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:21:26.005496   35073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:26.019611   35073 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:21:26.019635   35073 api_server.go:166] Checking apiserver status ...
	I0828 17:21:26.019675   35073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:21:26.032726   35073 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W0828 17:21:26.042564   35073 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:21:26.042629   35073 ssh_runner.go:195] Run: ls
	I0828 17:21:26.046553   35073 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:21:26.052761   35073 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:21:26.052784   35073 status.go:422] ha-240486-m03 apiserver status = Running (err=<nil>)
	I0828 17:21:26.052792   35073 status.go:257] ha-240486-m03 status: &{Name:ha-240486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:21:26.052806   35073 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:21:26.053124   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:26.053164   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:26.068239   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0828 17:21:26.068708   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:26.069183   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:26.069207   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:26.069477   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:26.069668   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:26.071306   35073 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:21:26.071320   35073 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:26.071596   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:26.071628   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:26.086089   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0828 17:21:26.086507   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:26.087010   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:26.087033   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:26.087359   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:26.087548   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:21:26.090347   35073 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:26.090759   35073 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:26.090784   35073 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:26.090925   35073 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:21:26.091208   35073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:26.091242   35073 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:26.106578   35073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40991
	I0828 17:21:26.107037   35073 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:26.107568   35073 main.go:141] libmachine: Using API Version  1
	I0828 17:21:26.107585   35073 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:26.107891   35073 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:26.108057   35073 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:21:26.108251   35073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:21:26.108278   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:21:26.111016   35073 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:26.111504   35073 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:26.111531   35073 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:26.111681   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:21:26.111898   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:21:26.112048   35073 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:21:26.112189   35073 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:21:26.188696   35073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:21:26.201688   35073 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-240486 -n ha-240486
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-240486 logs -n 25: (1.277028676s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m03_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m04 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp testdata/cp-test.txt                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m04_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03:/home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m03 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-240486 node stop m02 -v=7                                                     | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-240486 node start m02 -v=7                                                    | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:13:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:13:48.262328   29200 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:13:48.262571   29200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:48.262579   29200 out.go:358] Setting ErrFile to fd 2...
	I0828 17:13:48.262584   29200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:48.262740   29200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:13:48.263283   29200 out.go:352] Setting JSON to false
	I0828 17:13:48.264133   29200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3374,"bootTime":1724861854,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:13:48.264183   29200 start.go:139] virtualization: kvm guest
	I0828 17:13:48.266113   29200 out.go:177] * [ha-240486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:13:48.267263   29200 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:13:48.267283   29200 notify.go:220] Checking for updates...
	I0828 17:13:48.269420   29200 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:13:48.270714   29200 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:13:48.271818   29200 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.273007   29200 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:13:48.274135   29200 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:13:48.275295   29200 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:13:48.309572   29200 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 17:13:48.310717   29200 start.go:297] selected driver: kvm2
	I0828 17:13:48.310731   29200 start.go:901] validating driver "kvm2" against <nil>
	I0828 17:13:48.310747   29200 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:13:48.311429   29200 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:13:48.311503   29200 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:13:48.327499   29200 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:13:48.327546   29200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 17:13:48.327783   29200 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:13:48.327856   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:13:48.327870   29200 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0828 17:13:48.327878   29200 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0828 17:13:48.327941   29200 start.go:340] cluster config:
	{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:13:48.328042   29200 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:13:48.329722   29200 out.go:177] * Starting "ha-240486" primary control-plane node in "ha-240486" cluster
	I0828 17:13:48.330806   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:13:48.330841   29200 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:13:48.330853   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:13:48.330952   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:13:48.330969   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:13:48.331293   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:13:48.331317   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json: {Name:mkc18ce99584c5845a4945732a372403690216b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:13:48.331469   29200 start.go:360] acquireMachinesLock for ha-240486: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:13:48.331509   29200 start.go:364] duration metric: took 23.247µs to acquireMachinesLock for "ha-240486"
	I0828 17:13:48.331531   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:13:48.331597   29200 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 17:13:48.333046   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:13:48.333193   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:48.333236   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:48.347585   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 17:13:48.348066   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:48.348580   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:13:48.348607   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:48.348949   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:48.349129   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:13:48.349265   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:13:48.349448   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:13:48.349473   29200 client.go:168] LocalClient.Create starting
	I0828 17:13:48.349513   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:13:48.349548   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:13:48.349575   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:13:48.349662   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:13:48.349689   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:13:48.349716   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:13:48.349740   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:13:48.349751   29200 main.go:141] libmachine: (ha-240486) Calling .PreCreateCheck
	I0828 17:13:48.350123   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:13:48.350527   29200 main.go:141] libmachine: Creating machine...
	I0828 17:13:48.350539   29200 main.go:141] libmachine: (ha-240486) Calling .Create
	I0828 17:13:48.350664   29200 main.go:141] libmachine: (ha-240486) Creating KVM machine...
	I0828 17:13:48.351731   29200 main.go:141] libmachine: (ha-240486) DBG | found existing default KVM network
	I0828 17:13:48.352350   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.352226   29223 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014730}
	I0828 17:13:48.352434   29200 main.go:141] libmachine: (ha-240486) DBG | created network xml: 
	I0828 17:13:48.352456   29200 main.go:141] libmachine: (ha-240486) DBG | <network>
	I0828 17:13:48.352467   29200 main.go:141] libmachine: (ha-240486) DBG |   <name>mk-ha-240486</name>
	I0828 17:13:48.352477   29200 main.go:141] libmachine: (ha-240486) DBG |   <dns enable='no'/>
	I0828 17:13:48.352497   29200 main.go:141] libmachine: (ha-240486) DBG |   
	I0828 17:13:48.352517   29200 main.go:141] libmachine: (ha-240486) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0828 17:13:48.352523   29200 main.go:141] libmachine: (ha-240486) DBG |     <dhcp>
	I0828 17:13:48.352529   29200 main.go:141] libmachine: (ha-240486) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0828 17:13:48.352535   29200 main.go:141] libmachine: (ha-240486) DBG |     </dhcp>
	I0828 17:13:48.352540   29200 main.go:141] libmachine: (ha-240486) DBG |   </ip>
	I0828 17:13:48.352545   29200 main.go:141] libmachine: (ha-240486) DBG |   
	I0828 17:13:48.352551   29200 main.go:141] libmachine: (ha-240486) DBG | </network>
	I0828 17:13:48.352559   29200 main.go:141] libmachine: (ha-240486) DBG | 
	I0828 17:13:48.357237   29200 main.go:141] libmachine: (ha-240486) DBG | trying to create private KVM network mk-ha-240486 192.168.39.0/24...
	I0828 17:13:48.421793   29200 main.go:141] libmachine: (ha-240486) DBG | private KVM network mk-ha-240486 192.168.39.0/24 created
	I0828 17:13:48.421848   29200 main.go:141] libmachine: (ha-240486) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 ...
	I0828 17:13:48.421865   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.421778   29223 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.421890   29200 main.go:141] libmachine: (ha-240486) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:13:48.421984   29200 main.go:141] libmachine: (ha-240486) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:13:48.660331   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.660212   29223 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa...
	I0828 17:13:48.911596   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.911454   29223 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/ha-240486.rawdisk...
	I0828 17:13:48.911642   29200 main.go:141] libmachine: (ha-240486) DBG | Writing magic tar header
	I0828 17:13:48.911652   29200 main.go:141] libmachine: (ha-240486) DBG | Writing SSH key tar header
	I0828 17:13:48.911660   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:48.911573   29223 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 ...
	I0828 17:13:48.911670   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486
	I0828 17:13:48.911715   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486 (perms=drwx------)
	I0828 17:13:48.911741   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:13:48.911752   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:13:48.911781   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:48.911788   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:13:48.911800   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:13:48.911809   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:13:48.911824   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:13:48.911839   29200 main.go:141] libmachine: (ha-240486) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:13:48.911844   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:13:48.911853   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:13:48.911860   29200 main.go:141] libmachine: (ha-240486) DBG | Checking permissions on dir: /home
	I0828 17:13:48.911865   29200 main.go:141] libmachine: (ha-240486) Creating domain...
	I0828 17:13:48.911871   29200 main.go:141] libmachine: (ha-240486) DBG | Skipping /home - not owner
	I0828 17:13:48.912933   29200 main.go:141] libmachine: (ha-240486) define libvirt domain using xml: 
	I0828 17:13:48.912959   29200 main.go:141] libmachine: (ha-240486) <domain type='kvm'>
	I0828 17:13:48.912981   29200 main.go:141] libmachine: (ha-240486)   <name>ha-240486</name>
	I0828 17:13:48.912994   29200 main.go:141] libmachine: (ha-240486)   <memory unit='MiB'>2200</memory>
	I0828 17:13:48.913022   29200 main.go:141] libmachine: (ha-240486)   <vcpu>2</vcpu>
	I0828 17:13:48.913040   29200 main.go:141] libmachine: (ha-240486)   <features>
	I0828 17:13:48.913048   29200 main.go:141] libmachine: (ha-240486)     <acpi/>
	I0828 17:13:48.913055   29200 main.go:141] libmachine: (ha-240486)     <apic/>
	I0828 17:13:48.913061   29200 main.go:141] libmachine: (ha-240486)     <pae/>
	I0828 17:13:48.913074   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913083   29200 main.go:141] libmachine: (ha-240486)   </features>
	I0828 17:13:48.913094   29200 main.go:141] libmachine: (ha-240486)   <cpu mode='host-passthrough'>
	I0828 17:13:48.913106   29200 main.go:141] libmachine: (ha-240486)   
	I0828 17:13:48.913120   29200 main.go:141] libmachine: (ha-240486)   </cpu>
	I0828 17:13:48.913131   29200 main.go:141] libmachine: (ha-240486)   <os>
	I0828 17:13:48.913139   29200 main.go:141] libmachine: (ha-240486)     <type>hvm</type>
	I0828 17:13:48.913144   29200 main.go:141] libmachine: (ha-240486)     <boot dev='cdrom'/>
	I0828 17:13:48.913151   29200 main.go:141] libmachine: (ha-240486)     <boot dev='hd'/>
	I0828 17:13:48.913157   29200 main.go:141] libmachine: (ha-240486)     <bootmenu enable='no'/>
	I0828 17:13:48.913164   29200 main.go:141] libmachine: (ha-240486)   </os>
	I0828 17:13:48.913177   29200 main.go:141] libmachine: (ha-240486)   <devices>
	I0828 17:13:48.913187   29200 main.go:141] libmachine: (ha-240486)     <disk type='file' device='cdrom'>
	I0828 17:13:48.913217   29200 main.go:141] libmachine: (ha-240486)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/boot2docker.iso'/>
	I0828 17:13:48.913241   29200 main.go:141] libmachine: (ha-240486)       <target dev='hdc' bus='scsi'/>
	I0828 17:13:48.913255   29200 main.go:141] libmachine: (ha-240486)       <readonly/>
	I0828 17:13:48.913269   29200 main.go:141] libmachine: (ha-240486)     </disk>
	I0828 17:13:48.913287   29200 main.go:141] libmachine: (ha-240486)     <disk type='file' device='disk'>
	I0828 17:13:48.913303   29200 main.go:141] libmachine: (ha-240486)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:13:48.913318   29200 main.go:141] libmachine: (ha-240486)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/ha-240486.rawdisk'/>
	I0828 17:13:48.913329   29200 main.go:141] libmachine: (ha-240486)       <target dev='hda' bus='virtio'/>
	I0828 17:13:48.913339   29200 main.go:141] libmachine: (ha-240486)     </disk>
	I0828 17:13:48.913347   29200 main.go:141] libmachine: (ha-240486)     <interface type='network'>
	I0828 17:13:48.913360   29200 main.go:141] libmachine: (ha-240486)       <source network='mk-ha-240486'/>
	I0828 17:13:48.913370   29200 main.go:141] libmachine: (ha-240486)       <model type='virtio'/>
	I0828 17:13:48.913387   29200 main.go:141] libmachine: (ha-240486)     </interface>
	I0828 17:13:48.913403   29200 main.go:141] libmachine: (ha-240486)     <interface type='network'>
	I0828 17:13:48.913411   29200 main.go:141] libmachine: (ha-240486)       <source network='default'/>
	I0828 17:13:48.913421   29200 main.go:141] libmachine: (ha-240486)       <model type='virtio'/>
	I0828 17:13:48.913433   29200 main.go:141] libmachine: (ha-240486)     </interface>
	I0828 17:13:48.913444   29200 main.go:141] libmachine: (ha-240486)     <serial type='pty'>
	I0828 17:13:48.913456   29200 main.go:141] libmachine: (ha-240486)       <target port='0'/>
	I0828 17:13:48.913465   29200 main.go:141] libmachine: (ha-240486)     </serial>
	I0828 17:13:48.913494   29200 main.go:141] libmachine: (ha-240486)     <console type='pty'>
	I0828 17:13:48.913512   29200 main.go:141] libmachine: (ha-240486)       <target type='serial' port='0'/>
	I0828 17:13:48.913523   29200 main.go:141] libmachine: (ha-240486)     </console>
	I0828 17:13:48.913533   29200 main.go:141] libmachine: (ha-240486)     <rng model='virtio'>
	I0828 17:13:48.913546   29200 main.go:141] libmachine: (ha-240486)       <backend model='random'>/dev/random</backend>
	I0828 17:13:48.913564   29200 main.go:141] libmachine: (ha-240486)     </rng>
	I0828 17:13:48.913573   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913584   29200 main.go:141] libmachine: (ha-240486)     
	I0828 17:13:48.913592   29200 main.go:141] libmachine: (ha-240486)   </devices>
	I0828 17:13:48.913601   29200 main.go:141] libmachine: (ha-240486) </domain>
	I0828 17:13:48.913613   29200 main.go:141] libmachine: (ha-240486) 
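
The block above is the complete libvirt domain XML that the kvm2 driver assembles before defining and booting the machine. Below is a minimal sketch of the same define-and-start flow using the Go libvirt bindings (import path assumed to be libvirt.org/go/libvirt); the XML is abbreviated from the log, and the snippet is an illustration, not minikube's actual driver code.

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// Abbreviated from the domain XML printed in the log above.
const domainXML = `<domain type='kvm'>
  <name>ha-240486</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type></os>
</domain>`

func main() {
	// The kvm2 driver talks to the system libvirt daemon (KVMQemuURI:qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// "define libvirt domain using xml" followed by "Creating domain..." in the log.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain defined and started")
}
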
	I0828 17:13:48.918440   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:69:65:7c in network default
	I0828 17:13:48.919008   29200 main.go:141] libmachine: (ha-240486) Ensuring networks are active...
	I0828 17:13:48.919027   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:48.919726   29200 main.go:141] libmachine: (ha-240486) Ensuring network default is active
	I0828 17:13:48.920030   29200 main.go:141] libmachine: (ha-240486) Ensuring network mk-ha-240486 is active
	I0828 17:13:48.920468   29200 main.go:141] libmachine: (ha-240486) Getting domain xml...
	I0828 17:13:48.921207   29200 main.go:141] libmachine: (ha-240486) Creating domain...
	I0828 17:13:50.102460   29200 main.go:141] libmachine: (ha-240486) Waiting to get IP...
	I0828 17:13:50.103099   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.103421   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.103472   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.103409   29223 retry.go:31] will retry after 253.535151ms: waiting for machine to come up
	I0828 17:13:50.359134   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.359644   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.359687   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.359620   29223 retry.go:31] will retry after 316.872772ms: waiting for machine to come up
	I0828 17:13:50.678183   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:50.678576   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:50.678598   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:50.678528   29223 retry.go:31] will retry after 461.024783ms: waiting for machine to come up
	I0828 17:13:51.140747   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:51.141160   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:51.141187   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:51.141110   29223 retry.go:31] will retry after 397.899332ms: waiting for machine to come up
	I0828 17:13:51.540611   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:51.540944   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:51.540970   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:51.540900   29223 retry.go:31] will retry after 522.638296ms: waiting for machine to come up
	I0828 17:13:52.064600   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:52.064967   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:52.064991   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:52.064946   29223 retry.go:31] will retry after 589.769235ms: waiting for machine to come up
	I0828 17:13:52.656653   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:52.657074   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:52.657113   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:52.657020   29223 retry.go:31] will retry after 753.231977ms: waiting for machine to come up
	I0828 17:13:53.411846   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:53.412189   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:53.412210   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:53.412163   29223 retry.go:31] will retry after 954.837864ms: waiting for machine to come up
	I0828 17:13:54.368491   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:54.368908   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:54.368931   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:54.368870   29223 retry.go:31] will retry after 1.471935642s: waiting for machine to come up
	I0828 17:13:55.841866   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:55.842270   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:55.842294   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:55.842208   29223 retry.go:31] will retry after 2.247459315s: waiting for machine to come up
	I0828 17:13:58.092692   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:13:58.093213   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:13:58.093266   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:13:58.093202   29223 retry.go:31] will retry after 2.877612232s: waiting for machine to come up
	I0828 17:14:00.974142   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:00.974458   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:14:00.974476   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:14:00.974435   29223 retry.go:31] will retry after 3.170605692s: waiting for machine to come up
	I0828 17:14:04.146350   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:04.146852   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find current IP address of domain ha-240486 in network mk-ha-240486
	I0828 17:14:04.146877   29200 main.go:141] libmachine: (ha-240486) DBG | I0828 17:14:04.146813   29223 retry.go:31] will retry after 3.284470654s: waiting for machine to come up
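
The sequence above is the driver polling the libvirt network for a DHCP lease on the VM's MAC address, waiting a little longer on each attempt (253ms, 316ms, 461ms and so on, up to a few seconds). A minimal sketch of that retry-with-backoff pattern follows; pollLease is a hypothetical stand-in for the lease lookup, not a real driver function.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// pollLease is a hypothetical stand-in for looking up the domain's
// DHCP lease by MAC address in the libvirt network.
func pollLease(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := pollLease(mac)
		if err == nil {
			return ip, nil
		}
		// Grow the delay with a little jitter, mirroring the
		// 253ms -> 316ms -> ... -> ~3.3s progression in the log.
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay + delay/2 + time.Duration(rand.Int63n(int64(delay/4)))
	}
	return "", fmt.Errorf("timed out waiting for IP on MAC %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:3e:e0:a1", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
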
	I0828 17:14:07.435035   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.435406   29200 main.go:141] libmachine: (ha-240486) Found IP for machine: 192.168.39.227
	I0828 17:14:07.435435   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has current primary IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.435444   29200 main.go:141] libmachine: (ha-240486) Reserving static IP address...
	I0828 17:14:07.435821   29200 main.go:141] libmachine: (ha-240486) DBG | unable to find host DHCP lease matching {name: "ha-240486", mac: "52:54:00:3e:e0:a1", ip: "192.168.39.227"} in network mk-ha-240486
	I0828 17:14:07.506358   29200 main.go:141] libmachine: (ha-240486) DBG | Getting to WaitForSSH function...
	I0828 17:14:07.506380   29200 main.go:141] libmachine: (ha-240486) Reserved static IP address: 192.168.39.227
	I0828 17:14:07.506390   29200 main.go:141] libmachine: (ha-240486) Waiting for SSH to be available...
	I0828 17:14:07.508836   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.509214   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.509240   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.509354   29200 main.go:141] libmachine: (ha-240486) DBG | Using SSH client type: external
	I0828 17:14:07.509374   29200 main.go:141] libmachine: (ha-240486) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa (-rw-------)
	I0828 17:14:07.509408   29200 main.go:141] libmachine: (ha-240486) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.227 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:14:07.509427   29200 main.go:141] libmachine: (ha-240486) DBG | About to run SSH command:
	I0828 17:14:07.509439   29200 main.go:141] libmachine: (ha-240486) DBG | exit 0
	I0828 17:14:07.633862   29200 main.go:141] libmachine: (ha-240486) DBG | SSH cmd err, output: <nil>: 
	I0828 17:14:07.634121   29200 main.go:141] libmachine: (ha-240486) KVM machine creation complete!
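
To decide that the new machine is reachable, the driver shells out to the external ssh binary with options like those shown above and runs "exit 0"; a zero exit status means SSH is available. A rough equivalent with os/exec, with the key path and address copied from the log (illustrative only, and only a subset of the options is shown):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the "Using SSH client type: external" step: run "exit 0" on the
	// guest and treat a clean exit as "SSH is up".
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa",
		"-p", "22",
		"docker@192.168.39.227",
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
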
	I0828 17:14:07.634446   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:14:07.635133   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:07.635456   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:07.635666   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:14:07.635683   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:07.636928   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:14:07.636943   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:14:07.636949   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:14:07.636955   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.639165   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.639485   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.639516   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.639625   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.639802   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.639938   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.640074   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.640191   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.640420   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.640433   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:14:07.745224   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:07.745254   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:14:07.745263   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.747753   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.748023   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.748050   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.748171   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.748341   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.748522   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.748674   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.748855   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.749022   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.749032   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:14:07.854381   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:14:07.854464   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:14:07.854473   29200 main.go:141] libmachine: Provisioning with buildroot...
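
Provisioner detection works by reading /etc/os-release over SSH and matching the distribution fields; here NAME/ID identify Buildroot, so the buildroot provisioner is chosen. A small sketch of that parsing step, with the selection logic simplified for illustration:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Output of "cat /etc/os-release" as captured in the log above.
const osRelease = `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"`

func main() {
	// Split each KEY=VALUE line and strip surrounding quotes.
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	if fields["ID"] == "buildroot" {
		fmt.Println("found compatible host: buildroot")
	}
}
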
	I0828 17:14:07.854480   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:07.854702   29200 buildroot.go:166] provisioning hostname "ha-240486"
	I0828 17:14:07.854716   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:07.854879   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.857404   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.857710   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.857744   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.857904   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.858065   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.858281   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.858407   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.858556   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.858706   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.858717   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486 && echo "ha-240486" | sudo tee /etc/hostname
	I0828 17:14:07.975330   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:14:07.975405   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:07.977872   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.978216   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:07.978243   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:07.978429   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:07.978601   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.978743   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:07.978859   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:07.979002   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:07.979203   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:07.979220   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:14:08.094018   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
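
Hostname provisioning is two remote commands, exactly as shown above: set the hostname, then make sure /etc/hosts maps 127.0.1.1 to it. A sketch of how those command strings can be composed on the host side (illustrative only; the shell fragments are taken from the log):

package main

import "fmt"

// hostnameCommand mirrors: sudo hostname ha-240486 && echo "ha-240486" | sudo tee /etc/hostname
func hostnameCommand(name string) string {
	return fmt.Sprintf("sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)
}

// hostsGuard mirrors the if/grep/sed block that keeps 127.0.1.1 pointing at the hostname.
func hostsGuard(name string) string {
	return fmt.Sprintf(
		`if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
			`if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
			`sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
			`else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name)
}

func main() {
	fmt.Println(hostnameCommand("ha-240486"))
	fmt.Println(hostsGuard("ha-240486"))
}
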
	I0828 17:14:08.094049   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:14:08.094137   29200 buildroot.go:174] setting up certificates
	I0828 17:14:08.094169   29200 provision.go:84] configureAuth start
	I0828 17:14:08.094188   29200 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:14:08.094498   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.097547   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.097924   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.097960   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.098127   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.100405   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.100666   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.100703   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.100814   29200 provision.go:143] copyHostCerts
	I0828 17:14:08.100848   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:08.100884   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:14:08.100906   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:08.100984   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:14:08.101076   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:08.101098   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:14:08.101103   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:08.101129   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:14:08.101176   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:08.101195   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:14:08.101202   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:08.101225   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:14:08.101277   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486 san=[127.0.0.1 192.168.39.227 ha-240486 localhost minikube]
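
The server certificate is generated on the host and signed by the minikube CA, with the SAN list shown above ([127.0.0.1 192.168.39.227 ha-240486 localhost minikube]). The following crypto/x509 sketch is a self-signed stand-in that only illustrates the SAN handling; the real flow signs with ca.pem/ca-key.pem, and the 26280h validity here simply matches the CertExpiration value that appears in the cluster config later in this log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-240486"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.227")},
		DNSNames:    []string{"ha-240486", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("server cert generated, %d DER bytes\n", len(der))
}
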
	I0828 17:14:08.164479   29200 provision.go:177] copyRemoteCerts
	I0828 17:14:08.164536   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:14:08.164559   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.167061   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.167333   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.167359   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.167512   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.167692   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.167857   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.168015   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.251718   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:14:08.251814   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:14:08.275840   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:14:08.275911   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0828 17:14:08.299681   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:14:08.299739   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:14:08.322881   29200 provision.go:87] duration metric: took 228.695209ms to configureAuth
	I0828 17:14:08.322904   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:14:08.323068   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:08.323130   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.325441   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.325777   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.325803   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.326012   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.326217   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.326434   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.326581   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.326771   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:08.326921   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:08.326935   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:14:08.546447   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:14:08.546479   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:14:08.546487   29200 main.go:141] libmachine: (ha-240486) Calling .GetURL
	I0828 17:14:08.547669   29200 main.go:141] libmachine: (ha-240486) DBG | Using libvirt version 6000000
	I0828 17:14:08.549610   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.549959   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.549990   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.550162   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:14:08.550175   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:14:08.550183   29200 client.go:171] duration metric: took 20.200699308s to LocalClient.Create
	I0828 17:14:08.550208   29200 start.go:167] duration metric: took 20.200759521s to libmachine.API.Create "ha-240486"
	I0828 17:14:08.550221   29200 start.go:293] postStartSetup for "ha-240486" (driver="kvm2")
	I0828 17:14:08.550234   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:14:08.550256   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.550498   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:14:08.550522   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.552712   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.553058   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.553083   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.553226   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.553400   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.553556   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.553707   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.636478   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:14:08.640579   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:14:08.640614   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:14:08.640678   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:14:08.640748   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:14:08.640757   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:14:08.640843   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:14:08.649972   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:08.671793   29200 start.go:296] duration metric: took 121.561129ms for postStartSetup
	I0828 17:14:08.671838   29200 main.go:141] libmachine: (ha-240486) Calling .GetConfigRaw
	I0828 17:14:08.672501   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.675302   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.675557   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.675583   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.675798   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:08.676012   29200 start.go:128] duration metric: took 20.344403229s to createHost
	I0828 17:14:08.676035   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.677935   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.678241   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.678266   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.678421   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.678608   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.678749   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.678881   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.679017   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:08.679172   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:14:08.679182   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:14:08.786455   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865248.759308588
	
	I0828 17:14:08.786484   29200 fix.go:216] guest clock: 1724865248.759308588
	I0828 17:14:08.786512   29200 fix.go:229] Guest: 2024-08-28 17:14:08.759308588 +0000 UTC Remote: 2024-08-28 17:14:08.676025288 +0000 UTC m=+20.448521902 (delta=83.2833ms)
	I0828 17:14:08.786570   29200 fix.go:200] guest clock delta is within tolerance: 83.2833ms
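
The guest clock check runs date +%s.%N over SSH, parses the seconds.nanoseconds pair, and compares it against a host-side timestamp taken around the same moment; here the delta is 83.2833ms. A small sketch of that comparison follows; the one-second tolerance is an assumption for the sketch, not necessarily minikube's threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	out := "1724865248.759308588"

	secs, nsecs := out, "0"
	if s, n, ok := strings.Cut(out, "."); ok {
		secs, nsecs = s, n
	}
	sec, _ := strconv.ParseInt(secs, 10, 64)
	nsec, _ := strconv.ParseInt(nsecs, 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp captured just before the command ran (from the log).
	remote := time.Date(2024, 8, 28, 17, 14, 8, 676025288, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}
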
	I0828 17:14:08.786578   29200 start.go:83] releasing machines lock for "ha-240486", held for 20.455057608s
	I0828 17:14:08.786605   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.786890   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:08.789379   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.789739   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.789765   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.789940   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790393   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790564   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:08.790650   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:14:08.790699   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.790756   29200 ssh_runner.go:195] Run: cat /version.json
	I0828 17:14:08.790792   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:08.793063   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793216   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793339   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.793365   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793460   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.793594   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:08.793615   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:08.793618   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.793774   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:08.793799   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.793972   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.793986   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:08.794148   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:08.794330   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:08.910458   29200 ssh_runner.go:195] Run: systemctl --version
	I0828 17:14:08.916156   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:14:09.069065   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:14:09.076762   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:14:09.076828   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:14:09.091408   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:14:09.091429   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:14:09.091489   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:14:09.106472   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:14:09.119494   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:14:09.119550   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:14:09.132644   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:14:09.145357   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:14:09.251477   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:14:09.388289   29200 docker.go:233] disabling docker service ...
	I0828 17:14:09.388378   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:14:09.402234   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:14:09.414586   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:14:09.544027   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:14:09.673320   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:14:09.686322   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:14:09.703741   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:14:09.703791   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.713385   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:14:09.713448   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.723776   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.736981   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.747413   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:14:09.757150   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.767031   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:09.782769   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
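
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands in the log, not a file captured from the VM:

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
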
	I0828 17:14:09.792250   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:14:09.800963   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:14:09.801007   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:14:09.813554   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:14:09.822146   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:09.947026   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:14:10.034549   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:14:10.034618   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:14:10.039646   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:14:10.039710   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:14:10.043145   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:14:10.080667   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
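
Before querying the runtime, startup waits up to 60s for the CRI socket to appear and then for crictl version to succeed, as logged above. A minimal sketch of the socket wait; the polling interval is an assumption for the sketch.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout expires,
// mirroring "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is ready")
}
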
	I0828 17:14:10.080736   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:10.107279   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:10.140331   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:14:10.141540   29200 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:14:10.144150   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:10.144534   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:10.144558   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:10.144717   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:14:10.148719   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:10.161645   29200 kubeadm.go:883] updating cluster {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:14:10.161744   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:14:10.161791   29200 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:14:10.193008   29200 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 17:14:10.193077   29200 ssh_runner.go:195] Run: which lz4
	I0828 17:14:10.196704   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0828 17:14:10.196806   29200 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 17:14:10.200474   29200 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 17:14:10.200512   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 17:14:11.366594   29200 crio.go:462] duration metric: took 1.169821448s to copy over tarball
	I0828 17:14:11.366678   29200 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 17:14:13.336766   29200 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.970049905s)
	I0828 17:14:13.336812   29200 crio.go:469] duration metric: took 1.970174251s to extract the tarball
	I0828 17:14:13.336823   29200 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 17:14:13.372537   29200 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:14:13.414366   29200 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:14:13.414391   29200 cache_images.go:84] Images are preloaded, skipping loading
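
After extracting the preload tarball, the same crictl images --output json listing is repeated and now finds the expected control-plane images (for example registry.k8s.io/kube-apiserver:v1.31.0), so image loading is skipped. A sketch of such a presence check; the JSON field names are an assumption based on crictl's output shape, not a verified schema.

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal assumed shape of `crictl images --output json`: only the fields
// needed for the check are modeled here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the listing contains the wanted repo:tag.
func hasImage(raw []byte, want string) bool {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true
			}
		}
	}
	return false
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"]}]}`)
	fmt.Println(hasImage(raw, "registry.k8s.io/kube-apiserver:v1.31.0")) // true
}
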
	I0828 17:14:13.414398   29200 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0828 17:14:13.414499   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:14:13.414566   29200 ssh_runner.go:195] Run: crio config
	I0828 17:14:13.461771   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:14:13.461787   29200 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 17:14:13.461797   29200 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:14:13.461819   29200 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-240486 NodeName:ha-240486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:14:13.461952   29200 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-240486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
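For illustration only (this is not minikube's actual generator; the struct and field names below are invented for the sketch), a kubeadm config of the shape shown above can be produced by filling a Go text/template with the per-cluster values:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values that vary per cluster in this sketch.
type clusterParams struct {
	AdvertiseAddress string
	BindPort         int
	ClusterName      string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTemplate = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress: "192.168.39.227",
		BindPort:         8443,
		ClusterName:      "mk",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	}
	// Render to stdout; the flow in the log writes the rendered config to
	// /var/tmp/minikube/kubeadm.yaml.new on the VM over SSH.
	t := template.Must(template.New("kubeadm").Parse(kubeadmTemplate))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}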
	
	I0828 17:14:13.461974   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:14:13.462016   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:14:13.478842   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:14:13.478947   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0828 17:14:13.479005   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:13.488191   29200 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:14:13.488260   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0828 17:14:13.497268   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0828 17:14:13.512417   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:14:13.527562   29200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0828 17:14:13.542655   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0828 17:14:13.557823   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:14:13.561389   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
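The one-liner above rewrites /etc/hosts: it filters out any stale line for control-plane.minikube.internal, appends the current VIP mapping to a temp file, then copies the result back. A minimal Go sketch of the same idiom, simplified to a direct write (the helper name upsertHostsEntry is invented for this example and is not minikube code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line mapping name and appends "ip<TAB>name".
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror `grep -v $'\t<name>$'`: keep lines that do not end in "<TAB>name".
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log above; writing /etc/hosts requires root.
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}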
	I0828 17:14:13.572690   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:13.688585   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:14:13.704412   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.227
	I0828 17:14:13.704444   29200 certs.go:194] generating shared ca certs ...
	I0828 17:14:13.704461   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.704627   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:14:13.704668   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:14:13.704676   29200 certs.go:256] generating profile certs ...
	I0828 17:14:13.704733   29200 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:14:13.704749   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt with IP's: []
	I0828 17:14:13.831682   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt ...
	I0828 17:14:13.831708   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt: {Name:mk66759107edf8d0bebbbe02121a430074fdfe10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.831896   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key ...
	I0828 17:14:13.831911   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key: {Name:mkf62adf398d03ad935437fbd19c6e593dd9b953 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:13.831994   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd
	I0828 17:14:13.832008   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.254]
	I0828 17:14:14.103313   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd ...
	I0828 17:14:14.103342   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd: {Name:mkb51258da04d783bb7cf6695912752804f8bdd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.103493   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd ...
	I0828 17:14:14.103505   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd: {Name:mkd920f9d9856108b94330ec655e07e394a548c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.103572   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.e15d05bd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:14:14.103669   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.e15d05bd -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:14:14.103723   29200 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:14:14.103763   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt with IP's: []
	I0828 17:14:14.189744   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt ...
	I0828 17:14:14.189777   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt: {Name:mkf86a5e9ba97890f5f5fab87c5e67448d427d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:14.189928   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key ...
	I0828 17:14:14.189939   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key: {Name:mkffc092eac46d4d3d8650d02f5802b03fae0e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
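The cert steps above (a client cert and an aggregator proxy-client cert signed by minikubeCA, plus an apiserver serving cert whose IP SANs include the node IP and the HA VIP) can be sketched with Go's standard crypto/x509 package. This is an illustrative sketch, not minikube's crypto.go; error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, playing the role of "minikubeCA" (errors elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving certificate signed by the CA, with the same IP SANs as the
	// apiserver cert generated above (service IP, localhost, node IP, VIP).
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.227"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// PEM-encode the signed certificate, analogous to what is written to apiserver.crt.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}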
	I0828 17:14:14.190003   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:14:14.190020   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:14:14.190030   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:14:14.190043   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:14:14.190054   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:14:14.190069   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:14:14.190107   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:14:14.190124   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:14:14.190179   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:14:14.190217   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:14:14.190227   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:14:14.190250   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:14:14.190273   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:14:14.190294   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:14:14.190337   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:14.190363   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.190378   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.190404   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.190946   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:14:14.214763   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:14:14.236519   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:14:14.258332   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:14:14.283248   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 17:14:14.307773   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:14:14.332795   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:14:14.355311   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:14:14.377602   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:14:14.399309   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:14:14.421842   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:14:14.445679   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:14:14.478814   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:14:14.485538   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:14:14.502012   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.506266   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.506327   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:14.511879   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:14:14.521746   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:14:14.532498   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.536782   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.536832   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:14:14.542304   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:14:14.552354   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:14:14.562443   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.566315   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.566367   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:14:14.571462   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
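The openssl/ln sequence above follows the OpenSSL trust-directory convention: each CA certificate under /etc/ssl/certs must be reachable through a symlink named "<subject-hash>.0", where the hash is what `openssl x509 -hash -noout -in <cert>` prints (b5213941 for minikubeCA.pem, 51391683 for 17528.pem, 3ec20f2e for 175282.pem in this run). A small Go sketch of the same step (the function name linkIntoTrustDir is invented for the example):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoTrustDir asks openssl for the certificate's subject hash and creates
// the "<hash>.0" symlink that the TLS trust lookup expects.
func linkIntoTrustDir(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoTrustDir("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}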
	I0828 17:14:14.581095   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:14:14.584646   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:14:14.584705   29200 kubeadm.go:392] StartCluster: {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:14:14.584806   29200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:14:14.584864   29200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:14:14.622690   29200 cri.go:89] found id: ""
	I0828 17:14:14.622760   29200 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 17:14:14.632258   29200 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 17:14:14.641430   29200 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 17:14:14.650444   29200 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 17:14:14.650458   29200 kubeadm.go:157] found existing configuration files:
	
	I0828 17:14:14.650509   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 17:14:14.658834   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 17:14:14.658896   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 17:14:14.667393   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 17:14:14.675746   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 17:14:14.675791   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 17:14:14.684492   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 17:14:14.692555   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 17:14:14.692595   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 17:14:14.700992   29200 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 17:14:14.709128   29200 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 17:14:14.709169   29200 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 17:14:14.717497   29200 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 17:14:14.808101   29200 kubeadm.go:310] W0828 17:14:14.788966     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:14:14.808801   29200 kubeadm.go:310] W0828 17:14:14.789782     836 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 17:14:14.908998   29200 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 17:14:25.023761   29200 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 17:14:25.023809   29200 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 17:14:25.023885   29200 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 17:14:25.023985   29200 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 17:14:25.024061   29200 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 17:14:25.024155   29200 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 17:14:25.025534   29200 out.go:235]   - Generating certificates and keys ...
	I0828 17:14:25.025609   29200 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 17:14:25.025669   29200 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 17:14:25.025738   29200 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 17:14:25.025816   29200 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 17:14:25.025904   29200 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 17:14:25.025979   29200 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 17:14:25.026045   29200 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 17:14:25.026225   29200 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-240486 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0828 17:14:25.026305   29200 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 17:14:25.026486   29200 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-240486 localhost] and IPs [192.168.39.227 127.0.0.1 ::1]
	I0828 17:14:25.026580   29200 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 17:14:25.026673   29200 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 17:14:25.026739   29200 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 17:14:25.026814   29200 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 17:14:25.026888   29200 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 17:14:25.026969   29200 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 17:14:25.027053   29200 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 17:14:25.027142   29200 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 17:14:25.027221   29200 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 17:14:25.027322   29200 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 17:14:25.027386   29200 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 17:14:25.028852   29200 out.go:235]   - Booting up control plane ...
	I0828 17:14:25.028955   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 17:14:25.029069   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 17:14:25.029133   29200 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 17:14:25.029257   29200 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 17:14:25.029365   29200 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 17:14:25.029420   29200 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 17:14:25.029536   29200 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 17:14:25.029672   29200 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 17:14:25.029743   29200 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.178877ms
	I0828 17:14:25.029843   29200 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 17:14:25.029933   29200 kubeadm.go:310] [api-check] The API server is healthy after 6.085182884s
	I0828 17:14:25.030051   29200 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 17:14:25.030210   29200 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 17:14:25.030297   29200 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 17:14:25.030528   29200 kubeadm.go:310] [mark-control-plane] Marking the node ha-240486 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 17:14:25.030605   29200 kubeadm.go:310] [bootstrap-token] Using token: tx0kpz.xk8c8jbbyazjlymg
	I0828 17:14:25.031867   29200 out.go:235]   - Configuring RBAC rules ...
	I0828 17:14:25.031978   29200 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 17:14:25.032069   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 17:14:25.032254   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 17:14:25.032417   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 17:14:25.032543   29200 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 17:14:25.032621   29200 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 17:14:25.032724   29200 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 17:14:25.032761   29200 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 17:14:25.032808   29200 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 17:14:25.032819   29200 kubeadm.go:310] 
	I0828 17:14:25.032870   29200 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 17:14:25.032880   29200 kubeadm.go:310] 
	I0828 17:14:25.032968   29200 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 17:14:25.032974   29200 kubeadm.go:310] 
	I0828 17:14:25.032995   29200 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 17:14:25.033047   29200 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 17:14:25.033092   29200 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 17:14:25.033098   29200 kubeadm.go:310] 
	I0828 17:14:25.033145   29200 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 17:14:25.033151   29200 kubeadm.go:310] 
	I0828 17:14:25.033190   29200 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 17:14:25.033201   29200 kubeadm.go:310] 
	I0828 17:14:25.033241   29200 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 17:14:25.033303   29200 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 17:14:25.033362   29200 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 17:14:25.033368   29200 kubeadm.go:310] 
	I0828 17:14:25.033439   29200 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 17:14:25.033506   29200 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 17:14:25.033512   29200 kubeadm.go:310] 
	I0828 17:14:25.033577   29200 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tx0kpz.xk8c8jbbyazjlymg \
	I0828 17:14:25.033693   29200 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 17:14:25.033725   29200 kubeadm.go:310] 	--control-plane 
	I0828 17:14:25.033732   29200 kubeadm.go:310] 
	I0828 17:14:25.033799   29200 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 17:14:25.033806   29200 kubeadm.go:310] 
	I0828 17:14:25.033885   29200 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tx0kpz.xk8c8jbbyazjlymg \
	I0828 17:14:25.033980   29200 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
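For reference, the --discovery-token-ca-cert-hash value printed above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which lets joining nodes pin the CA they discover over the bootstrap token. A minimal Go sketch that recomputes it from the ca.crt path used in this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path used by this cluster; adjust for other setups.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm's pin format: "sha256:" + hex(SHA-256 of the DER-encoded SPKI).
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}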
	I0828 17:14:25.033990   29200 cni.go:84] Creating CNI manager for ""
	I0828 17:14:25.033995   29200 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0828 17:14:25.036206   29200 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0828 17:14:25.037364   29200 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0828 17:14:25.042564   29200 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0828 17:14:25.042579   29200 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0828 17:14:25.062153   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0828 17:14:25.440327   29200 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 17:14:25.440444   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:25.440452   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486 minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=true
	I0828 17:14:25.460051   29200 ops.go:34] apiserver oom_adj: -16
	I0828 17:14:25.664735   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:26.165116   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:26.664899   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:27.165714   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:27.665331   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.164925   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.665631   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 17:14:28.775282   29200 kubeadm.go:1113] duration metric: took 3.334915766s to wait for elevateKubeSystemPrivileges
	I0828 17:14:28.775319   29200 kubeadm.go:394] duration metric: took 14.190618055s to StartCluster
	I0828 17:14:28.775342   29200 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:28.775423   29200 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:14:28.776337   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:28.776575   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 17:14:28.776597   29200 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:28.776633   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:14:28.776650   29200 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 17:14:28.776743   29200 addons.go:69] Setting storage-provisioner=true in profile "ha-240486"
	I0828 17:14:28.776779   29200 addons.go:234] Setting addon storage-provisioner=true in "ha-240486"
	I0828 17:14:28.776813   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:28.776822   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:28.776746   29200 addons.go:69] Setting default-storageclass=true in profile "ha-240486"
	I0828 17:14:28.776888   29200 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-240486"
	I0828 17:14:28.777244   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.777291   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.777318   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.777353   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.792012   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I0828 17:14:28.792453   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I0828 17:14:28.792518   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.792801   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.793018   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.793044   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.793303   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.793326   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.793388   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.793583   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.793636   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.794132   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.794170   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.795658   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:14:28.795927   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 17:14:28.796459   29200 cert_rotation.go:140] Starting client certificate rotation controller
	I0828 17:14:28.796712   29200 addons.go:234] Setting addon default-storageclass=true in "ha-240486"
	I0828 17:14:28.796745   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:28.796992   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.797023   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.811222   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I0828 17:14:28.811260   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44965
	I0828 17:14:28.811638   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.811655   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.812105   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.812120   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.812136   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.812162   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.812509   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.812516   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.812697   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.813066   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:28.813095   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:28.814562   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:28.816906   29200 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:14:28.818305   29200 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:14:28.818326   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 17:14:28.818343   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:28.821529   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.821960   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:28.821993   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.822158   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:28.822365   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:28.822513   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:28.822663   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:28.827776   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33745
	I0828 17:14:28.828157   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:28.828558   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:28.828580   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:28.828869   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:28.829066   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:28.830642   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:28.830828   29200 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 17:14:28.830841   29200 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 17:14:28.830853   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:28.833514   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.833871   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:28.833900   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:28.833991   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:28.834154   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:28.834243   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:28.834365   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:28.985091   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 17:14:29.043973   29200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 17:14:29.044367   29200 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:14:29.656257   29200 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
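The long sed pipeline a few lines above is what performs this injection: it splices a hosts block into the CoreDNS Corefile just before the `forward . /etc/resolv.conf` line (and adds a `log` directive before `errors`), so that host.minikube.internal resolves to 192.168.39.1, the host side of the VM network, from inside the cluster:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }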
	I0828 17:14:29.825339   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825362   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825475   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825497   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825690   29200 main.go:141] libmachine: (ha-240486) DBG | Closing plugin on server side
	I0828 17:14:29.825713   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.825726   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.825739   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825771   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.825789   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.825803   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.825816   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.825859   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.826052   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.826065   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.826277   29200 main.go:141] libmachine: (ha-240486) DBG | Closing plugin on server side
	I0828 17:14:29.826302   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.826332   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.826429   29200 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 17:14:29.826449   29200 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 17:14:29.826561   29200 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0828 17:14:29.826571   29200 round_trippers.go:469] Request Headers:
	I0828 17:14:29.826582   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:14:29.826595   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:14:29.837409   29200 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0828 17:14:29.839580   29200 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0828 17:14:29.839600   29200 round_trippers.go:469] Request Headers:
	I0828 17:14:29.839611   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:14:29.839617   29200 round_trippers.go:473]     Content-Type: application/json
	I0828 17:14:29.839621   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:14:29.842812   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:14:29.842985   29200 main.go:141] libmachine: Making call to close driver server
	I0828 17:14:29.843004   29200 main.go:141] libmachine: (ha-240486) Calling .Close
	I0828 17:14:29.843253   29200 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:14:29.843272   29200 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:14:29.845064   29200 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0828 17:14:29.846438   29200 addons.go:510] duration metric: took 1.069792822s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0828 17:14:29.846474   29200 start.go:246] waiting for cluster config update ...
	I0828 17:14:29.846489   29200 start.go:255] writing updated cluster config ...
	I0828 17:14:29.848495   29200 out.go:201] 
	I0828 17:14:29.850555   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:29.850650   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:29.852115   29200 out.go:177] * Starting "ha-240486-m02" control-plane node in "ha-240486" cluster
	I0828 17:14:29.853234   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:14:29.853251   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:14:29.853338   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:14:29.853356   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:14:29.853422   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:29.853569   29200 start.go:360] acquireMachinesLock for ha-240486-m02: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:14:29.853609   29200 start.go:364] duration metric: took 22.687µs to acquireMachinesLock for "ha-240486-m02"
	I0828 17:14:29.853627   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:29.853695   29200 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0828 17:14:29.855387   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:14:29.855464   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:29.855496   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:29.870016   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39189
	I0828 17:14:29.870414   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:29.870871   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:29.870896   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:29.871164   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:29.871372   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:29.871496   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:29.871632   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:14:29.871662   29200 client.go:168] LocalClient.Create starting
	I0828 17:14:29.871698   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:14:29.871740   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:14:29.871761   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:14:29.871824   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:14:29.871866   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:14:29.871884   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:14:29.871909   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:14:29.871921   29200 main.go:141] libmachine: (ha-240486-m02) Calling .PreCreateCheck
	I0828 17:14:29.872081   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:29.872436   29200 main.go:141] libmachine: Creating machine...
	I0828 17:14:29.872450   29200 main.go:141] libmachine: (ha-240486-m02) Calling .Create
	I0828 17:14:29.872570   29200 main.go:141] libmachine: (ha-240486-m02) Creating KVM machine...
	I0828 17:14:29.873897   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found existing default KVM network
	I0828 17:14:29.873988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found existing private KVM network mk-ha-240486
	I0828 17:14:29.874197   29200 main.go:141] libmachine: (ha-240486-m02) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 ...
	I0828 17:14:29.874225   29200 main.go:141] libmachine: (ha-240486-m02) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:14:29.874237   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:29.874151   29549 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:14:29.874289   29200 main.go:141] libmachine: (ha-240486-m02) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:14:30.101165   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.101014   29549 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa...
	I0828 17:14:30.262160   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.261990   29549 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/ha-240486-m02.rawdisk...
	I0828 17:14:30.262195   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Writing magic tar header
	I0828 17:14:30.262219   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Writing SSH key tar header
	I0828 17:14:30.262233   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:30.262132   29549 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 ...
	I0828 17:14:30.262248   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02
	I0828 17:14:30.262263   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:14:30.262278   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:14:30.262292   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02 (perms=drwx------)
	I0828 17:14:30.262309   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:14:30.262324   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:14:30.262335   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:14:30.262350   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:14:30.262361   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:14:30.262374   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:14:30.262387   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Checking permissions on dir: /home
	I0828 17:14:30.262401   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:14:30.262421   29200 main.go:141] libmachine: (ha-240486-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:14:30.262433   29200 main.go:141] libmachine: (ha-240486-m02) Creating domain...
	I0828 17:14:30.262470   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Skipping /home - not owner
	I0828 17:14:30.263327   29200 main.go:141] libmachine: (ha-240486-m02) define libvirt domain using xml: 
	I0828 17:14:30.263346   29200 main.go:141] libmachine: (ha-240486-m02) <domain type='kvm'>
	I0828 17:14:30.263362   29200 main.go:141] libmachine: (ha-240486-m02)   <name>ha-240486-m02</name>
	I0828 17:14:30.263371   29200 main.go:141] libmachine: (ha-240486-m02)   <memory unit='MiB'>2200</memory>
	I0828 17:14:30.263383   29200 main.go:141] libmachine: (ha-240486-m02)   <vcpu>2</vcpu>
	I0828 17:14:30.263392   29200 main.go:141] libmachine: (ha-240486-m02)   <features>
	I0828 17:14:30.263401   29200 main.go:141] libmachine: (ha-240486-m02)     <acpi/>
	I0828 17:14:30.263409   29200 main.go:141] libmachine: (ha-240486-m02)     <apic/>
	I0828 17:14:30.263435   29200 main.go:141] libmachine: (ha-240486-m02)     <pae/>
	I0828 17:14:30.263456   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263470   29200 main.go:141] libmachine: (ha-240486-m02)   </features>
	I0828 17:14:30.263482   29200 main.go:141] libmachine: (ha-240486-m02)   <cpu mode='host-passthrough'>
	I0828 17:14:30.263494   29200 main.go:141] libmachine: (ha-240486-m02)   
	I0828 17:14:30.263507   29200 main.go:141] libmachine: (ha-240486-m02)   </cpu>
	I0828 17:14:30.263517   29200 main.go:141] libmachine: (ha-240486-m02)   <os>
	I0828 17:14:30.263522   29200 main.go:141] libmachine: (ha-240486-m02)     <type>hvm</type>
	I0828 17:14:30.263528   29200 main.go:141] libmachine: (ha-240486-m02)     <boot dev='cdrom'/>
	I0828 17:14:30.263534   29200 main.go:141] libmachine: (ha-240486-m02)     <boot dev='hd'/>
	I0828 17:14:30.263541   29200 main.go:141] libmachine: (ha-240486-m02)     <bootmenu enable='no'/>
	I0828 17:14:30.263547   29200 main.go:141] libmachine: (ha-240486-m02)   </os>
	I0828 17:14:30.263552   29200 main.go:141] libmachine: (ha-240486-m02)   <devices>
	I0828 17:14:30.263560   29200 main.go:141] libmachine: (ha-240486-m02)     <disk type='file' device='cdrom'>
	I0828 17:14:30.263577   29200 main.go:141] libmachine: (ha-240486-m02)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/boot2docker.iso'/>
	I0828 17:14:30.263592   29200 main.go:141] libmachine: (ha-240486-m02)       <target dev='hdc' bus='scsi'/>
	I0828 17:14:30.263603   29200 main.go:141] libmachine: (ha-240486-m02)       <readonly/>
	I0828 17:14:30.263615   29200 main.go:141] libmachine: (ha-240486-m02)     </disk>
	I0828 17:14:30.263626   29200 main.go:141] libmachine: (ha-240486-m02)     <disk type='file' device='disk'>
	I0828 17:14:30.263634   29200 main.go:141] libmachine: (ha-240486-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:14:30.263642   29200 main.go:141] libmachine: (ha-240486-m02)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/ha-240486-m02.rawdisk'/>
	I0828 17:14:30.263653   29200 main.go:141] libmachine: (ha-240486-m02)       <target dev='hda' bus='virtio'/>
	I0828 17:14:30.263678   29200 main.go:141] libmachine: (ha-240486-m02)     </disk>
	I0828 17:14:30.263700   29200 main.go:141] libmachine: (ha-240486-m02)     <interface type='network'>
	I0828 17:14:30.263711   29200 main.go:141] libmachine: (ha-240486-m02)       <source network='mk-ha-240486'/>
	I0828 17:14:30.263727   29200 main.go:141] libmachine: (ha-240486-m02)       <model type='virtio'/>
	I0828 17:14:30.263740   29200 main.go:141] libmachine: (ha-240486-m02)     </interface>
	I0828 17:14:30.263751   29200 main.go:141] libmachine: (ha-240486-m02)     <interface type='network'>
	I0828 17:14:30.263761   29200 main.go:141] libmachine: (ha-240486-m02)       <source network='default'/>
	I0828 17:14:30.263771   29200 main.go:141] libmachine: (ha-240486-m02)       <model type='virtio'/>
	I0828 17:14:30.263783   29200 main.go:141] libmachine: (ha-240486-m02)     </interface>
	I0828 17:14:30.263791   29200 main.go:141] libmachine: (ha-240486-m02)     <serial type='pty'>
	I0828 17:14:30.263803   29200 main.go:141] libmachine: (ha-240486-m02)       <target port='0'/>
	I0828 17:14:30.263816   29200 main.go:141] libmachine: (ha-240486-m02)     </serial>
	I0828 17:14:30.263822   29200 main.go:141] libmachine: (ha-240486-m02)     <console type='pty'>
	I0828 17:14:30.263829   29200 main.go:141] libmachine: (ha-240486-m02)       <target type='serial' port='0'/>
	I0828 17:14:30.263835   29200 main.go:141] libmachine: (ha-240486-m02)     </console>
	I0828 17:14:30.263842   29200 main.go:141] libmachine: (ha-240486-m02)     <rng model='virtio'>
	I0828 17:14:30.263848   29200 main.go:141] libmachine: (ha-240486-m02)       <backend model='random'>/dev/random</backend>
	I0828 17:14:30.263854   29200 main.go:141] libmachine: (ha-240486-m02)     </rng>
	I0828 17:14:30.263860   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263866   29200 main.go:141] libmachine: (ha-240486-m02)     
	I0828 17:14:30.263872   29200 main.go:141] libmachine: (ha-240486-m02)   </devices>
	I0828 17:14:30.263887   29200 main.go:141] libmachine: (ha-240486-m02) </domain>
	I0828 17:14:30.263897   29200 main.go:141] libmachine: (ha-240486-m02) 
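The XML dump above is the libvirt domain definition the kvm2 driver builds for the new node: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk, and one NIC on the private mk-ha-240486 network plus one on the default network. As a rough sketch of the define-and-start step (illustrative only, not the kvm2 driver's actual code; it assumes the libvirt Go bindings and a local qemu:///system connection):

// A hypothetical sketch (not the kvm2 driver's code): define a libvirt domain
// from XML like the dump above and start it, using the libvirt Go bindings.
package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	xml, err := os.ReadFile("ha-240486-m02.xml") // hypothetical file holding the XML shown above
	if err != nil {
		log.Fatal(err)
	}

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // corresponds to "Creating domain..."
		log.Fatal(err)
	}
	log.Println("domain defined and started; now wait for a DHCP lease")
}

Defining the domain first and starting it separately is why "define libvirt domain using xml", "Creating domain..." and "Waiting to get IP..." show up as distinct phases in the log that follows.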
	I0828 17:14:30.270633   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:e3:56:d7 in network default
	I0828 17:14:30.271175   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring networks are active...
	I0828 17:14:30.271197   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:30.271932   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring network default is active
	I0828 17:14:30.272289   29200 main.go:141] libmachine: (ha-240486-m02) Ensuring network mk-ha-240486 is active
	I0828 17:14:30.272742   29200 main.go:141] libmachine: (ha-240486-m02) Getting domain xml...
	I0828 17:14:30.273403   29200 main.go:141] libmachine: (ha-240486-m02) Creating domain...
	I0828 17:14:31.496045   29200 main.go:141] libmachine: (ha-240486-m02) Waiting to get IP...
	I0828 17:14:31.496823   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:31.497228   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:31.497280   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:31.497238   29549 retry.go:31] will retry after 309.330553ms: waiting for machine to come up
	I0828 17:14:31.808741   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:31.809684   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:31.809716   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:31.809619   29549 retry.go:31] will retry after 389.919333ms: waiting for machine to come up
	I0828 17:14:32.201158   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:32.201509   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:32.201534   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:32.201463   29549 retry.go:31] will retry after 376.365916ms: waiting for machine to come up
	I0828 17:14:32.579039   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:32.579501   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:32.579529   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:32.579449   29549 retry.go:31] will retry after 501.696482ms: waiting for machine to come up
	I0828 17:14:33.083410   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:33.083919   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:33.083948   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:33.083848   29549 retry.go:31] will retry after 704.393424ms: waiting for machine to come up
	I0828 17:14:33.789221   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:33.789613   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:33.789640   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:33.789571   29549 retry.go:31] will retry after 921.016003ms: waiting for machine to come up
	I0828 17:14:34.712190   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:34.712613   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:34.712646   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:34.712569   29549 retry.go:31] will retry after 810.327503ms: waiting for machine to come up
	I0828 17:14:35.524860   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:35.525335   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:35.525372   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:35.525317   29549 retry.go:31] will retry after 1.133731078s: waiting for machine to come up
	I0828 17:14:36.660577   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:36.660936   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:36.660956   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:36.660914   29549 retry.go:31] will retry after 1.611562831s: waiting for machine to come up
	I0828 17:14:38.273523   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:38.273917   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:38.273946   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:38.273869   29549 retry.go:31] will retry after 1.957592324s: waiting for machine to come up
	I0828 17:14:40.233439   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:40.233821   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:40.233850   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:40.233764   29549 retry.go:31] will retry after 2.876473022s: waiting for machine to come up
	I0828 17:14:43.113682   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:43.114056   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:43.114095   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:43.114018   29549 retry.go:31] will retry after 3.170561273s: waiting for machine to come up
	I0828 17:14:46.286603   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:46.286998   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find current IP address of domain ha-240486-m02 in network mk-ha-240486
	I0828 17:14:46.287026   29200 main.go:141] libmachine: (ha-240486-m02) DBG | I0828 17:14:46.286944   29549 retry.go:31] will retry after 2.886461612s: waiting for machine to come up
	I0828 17:14:49.176848   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.177265   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has current primary IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.177289   29200 main.go:141] libmachine: (ha-240486-m02) Found IP for machine: 192.168.39.103
	I0828 17:14:49.177302   29200 main.go:141] libmachine: (ha-240486-m02) Reserving static IP address...
	I0828 17:14:49.177626   29200 main.go:141] libmachine: (ha-240486-m02) DBG | unable to find host DHCP lease matching {name: "ha-240486-m02", mac: "52:54:00:b3:68:04", ip: "192.168.39.103"} in network mk-ha-240486
	I0828 17:14:49.249548   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Getting to WaitForSSH function...
	I0828 17:14:49.249574   29200 main.go:141] libmachine: (ha-240486-m02) Reserved static IP address: 192.168.39.103
	I0828 17:14:49.249624   29200 main.go:141] libmachine: (ha-240486-m02) Waiting for SSH to be available...
	I0828 17:14:49.252243   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.252577   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.252599   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.252787   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using SSH client type: external
	I0828 17:14:49.252813   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa (-rw-------)
	I0828 17:14:49.252842   29200 main.go:141] libmachine: (ha-240486-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.103 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:14:49.252856   29200 main.go:141] libmachine: (ha-240486-m02) DBG | About to run SSH command:
	I0828 17:14:49.252869   29200 main.go:141] libmachine: (ha-240486-m02) DBG | exit 0
	I0828 17:14:49.374056   29200 main.go:141] libmachine: (ha-240486-m02) DBG | SSH cmd err, output: <nil>: 
	I0828 17:14:49.374303   29200 main.go:141] libmachine: (ha-240486-m02) KVM machine creation complete!
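Before SSH can be reached, the driver polls the libvirt network's DHCP leases for the new MAC address, which is what produces the "will retry after ...: waiting for machine to come up" lines above. A minimal sketch of that grow-and-jitter retry loop (illustrative, not minikube's retry.go; lookupIP is a hypothetical helper that would read the DHCP leases):

// Illustrative retry loop, not minikube's retry.go. lookupIP is a hypothetical
// helper that would query the libvirt network's DHCP leases for the new MAC.
package provision

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil // "Found IP for machine: ..."
		}
		// Grow the delay and add jitter, which is why the waits in the log
		// creep up from roughly 300ms to a few seconds.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		log.Printf("will retry after %v: waiting for machine to come up", sleep)
		time.Sleep(sleep)
		if backoff < 3*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}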
	I0828 17:14:49.374645   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:49.375205   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:49.375408   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:49.375553   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:14:49.375569   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:14:49.376919   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:14:49.376932   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:14:49.376938   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:14:49.376944   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.379123   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.379507   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.379528   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.379716   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.379902   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.380068   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.380220   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.380366   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.380557   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.380578   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:14:49.477473   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:14:49.477494   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:14:49.477502   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.480089   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.480492   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.480526   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.480654   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.480810   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.480981   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.481112   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.481252   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.481456   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.481468   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:14:49.578647   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:14:49.578743   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:14:49.578758   29200 main.go:141] libmachine: Provisioning with buildroot...
	I0828 17:14:49.578765   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.579044   29200 buildroot.go:166] provisioning hostname "ha-240486-m02"
	I0828 17:14:49.579075   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.579259   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.582053   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.582427   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.582457   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.582642   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.582814   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.583003   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.583159   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.583329   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.583547   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.583565   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486-m02 && echo "ha-240486-m02" | sudo tee /etc/hostname
	I0828 17:14:49.697767   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486-m02
	
	I0828 17:14:49.697794   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.700421   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.700827   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.700852   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.701086   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.701272   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.701445   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.701571   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.701724   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:49.701919   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:49.701937   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:14:49.806362   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
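All of the provisioning above (the "sudo hostname ..." step, the /etc/hosts patch, and the cert copies that follow) is plain shell run over SSH with the per-machine id_rsa key and the "docker" user shown in the log. A rough equivalent using golang.org/x/crypto/ssh (a sketch under those assumptions, not libmachine's SSH runner):

// A sketch of running provisioning commands over SSH with the per-machine key,
// using golang.org/x/crypto/ssh; this is not libmachine's actual runner.
package provision

import (
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, keyPath, cmd string) ([]byte, error) {
	pem, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // SSH user from the log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return nil, err
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer sess.Close()
	return sess.CombinedOutput(cmd)
}

// Example usage, matching the hostname step in the log:
//   out, err := runOverSSH("192.168.39.103:22",
//       "/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa",
//       `sudo hostname ha-240486-m02 && echo "ha-240486-m02" | sudo tee /etc/hostname`)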
	I0828 17:14:49.806390   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:14:49.806427   29200 buildroot.go:174] setting up certificates
	I0828 17:14:49.806443   29200 provision.go:84] configureAuth start
	I0828 17:14:49.806463   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetMachineName
	I0828 17:14:49.806764   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:49.809479   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.809830   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.809855   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.809989   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.812004   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.812271   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.812299   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.812468   29200 provision.go:143] copyHostCerts
	I0828 17:14:49.812499   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:49.812535   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:14:49.812547   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:14:49.812625   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:14:49.812715   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:49.812740   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:14:49.812750   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:14:49.812785   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:14:49.812846   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:49.812870   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:14:49.812879   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:14:49.812913   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:14:49.812982   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486-m02 san=[127.0.0.1 192.168.39.103 ha-240486-m02 localhost minikube]
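configureAuth copies the shared CA material to the host and then mints a server certificate specific to this machine, with the SANs listed above (127.0.0.1, 192.168.39.103, ha-240486-m02, localhost, minikube). A compact sketch of that issuance with crypto/x509 (illustrative; the Organization, SANs and expiry are taken from the log and cluster config, everything else is assumed):

// Illustrative sketch, not minikube's provision package: sign a server cert
// with the cluster CA, adding the SANs the log lists for ha-240486-m02.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-240486-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-240486-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.103")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // DER-encoded server cert plus its private key
}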
	I0828 17:14:49.888543   29200 provision.go:177] copyRemoteCerts
	I0828 17:14:49.888600   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:14:49.888627   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:49.891270   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.891563   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:49.891589   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:49.891757   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:49.891982   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:49.892131   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:49.892264   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:49.971726   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:14:49.971806   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 17:14:49.994849   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:14:49.994921   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:14:50.017522   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:14:50.017586   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:14:50.040308   29200 provision.go:87] duration metric: took 233.852237ms to configureAuth
	I0828 17:14:50.040355   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:14:50.040511   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:50.040580   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.043078   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.043411   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.043442   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.043617   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.043806   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.043961   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.044124   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.044252   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:50.044397   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:50.044411   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:14:50.265971   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:14:50.266003   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:14:50.266013   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetURL
	I0828 17:14:50.267289   29200 main.go:141] libmachine: (ha-240486-m02) DBG | Using libvirt version 6000000
	I0828 17:14:50.269548   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.269866   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.269891   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.270040   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:14:50.270054   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:14:50.270061   29200 client.go:171] duration metric: took 20.398388754s to LocalClient.Create
	I0828 17:14:50.270102   29200 start.go:167] duration metric: took 20.398462834s to libmachine.API.Create "ha-240486"
	I0828 17:14:50.270115   29200 start.go:293] postStartSetup for "ha-240486-m02" (driver="kvm2")
	I0828 17:14:50.270128   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:14:50.270151   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.270420   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:14:50.270440   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.272619   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.272961   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.272985   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.273124   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.273308   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.273457   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.273591   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.353365   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:14:50.358483   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:14:50.358512   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:14:50.358581   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:14:50.358650   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:14:50.358663   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:14:50.358745   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:14:50.368139   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:50.392845   29200 start.go:296] duration metric: took 122.714343ms for postStartSetup
	I0828 17:14:50.392906   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetConfigRaw
	I0828 17:14:50.393528   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:50.396383   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.396750   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.396763   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.397003   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:14:50.397241   29200 start.go:128] duration metric: took 20.543534853s to createHost
	I0828 17:14:50.397265   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.399877   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.400199   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.400219   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.400426   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.400627   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.400783   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.400895   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.401030   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:14:50.401234   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I0828 17:14:50.401246   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:14:50.498646   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865290.473915256
	
	I0828 17:14:50.498666   29200 fix.go:216] guest clock: 1724865290.473915256
	I0828 17:14:50.498674   29200 fix.go:229] Guest: 2024-08-28 17:14:50.473915256 +0000 UTC Remote: 2024-08-28 17:14:50.397255079 +0000 UTC m=+62.169751704 (delta=76.660177ms)
	I0828 17:14:50.498689   29200 fix.go:200] guest clock delta is within tolerance: 76.660177ms
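The clock check runs "date +%s.%N" on the guest and compares it with the host's wall clock; the ~77ms delta above is inside tolerance, so no resync is needed. A small sketch of that comparison (the one-second tolerance here is an assumption for illustration, not the value minikube's fix.go uses):

// A sketch of the guest-clock check: parse the guest's "date +%s.%N" output
// and compare it with the local clock. The tolerance is assumed.
package provision

import (
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestDate string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func withinTolerance(delta time.Duration) bool {
	const tolerance = time.Second // assumed; the log shows a ~76ms delta passing
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}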
	I0828 17:14:50.498694   29200 start.go:83] releasing machines lock for "ha-240486-m02", held for 20.645075428s
	I0828 17:14:50.498710   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.499024   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:50.501564   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.501988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.502012   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.504433   29200 out.go:177] * Found network options:
	I0828 17:14:50.505883   29200 out.go:177]   - NO_PROXY=192.168.39.227
	W0828 17:14:50.507380   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:14:50.507416   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508049   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508257   29200 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:14:50.508363   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:14:50.508401   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	W0828 17:14:50.508522   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:14:50.508613   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:14:50.508649   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:14:50.511197   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511474   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511545   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.511574   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.511716   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.511881   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.511961   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:50.511992   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:50.512047   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.512148   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:14:50.512222   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.512325   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:14:50.512476   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:14:50.512636   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:14:50.743801   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:14:50.749218   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:14:50.749299   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:14:50.765791   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:14:50.765815   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:14:50.765888   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:14:50.782925   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:14:50.797403   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:14:50.797462   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:14:50.812777   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:14:50.827620   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:14:50.952895   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:14:51.085964   29200 docker.go:233] disabling docker service ...
	I0828 17:14:51.086038   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:14:51.100646   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:14:51.114372   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:14:51.258433   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:14:51.378426   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:14:51.392132   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:14:51.412693   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:14:51.412752   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.423135   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:14:51.423185   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.433375   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.442857   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.452289   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:14:51.462037   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.471401   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.487553   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:14:51.497005   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:14:51.505597   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:14:51.505659   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:14:51.516933   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:14:51.526099   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:51.632890   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
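	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and write /etc/crictl.yaml before CRI-O is restarted. A minimal sketch for double-checking the resulting state by hand on the node, assuming the same file paths as in the log, would be:

	  # inspect the drop-in that the log just edited
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  cat /etc/crictl.yaml
	  # the bridge key only exists once br_netfilter is loaded, as the log notes above
	  sudo sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables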
	I0828 17:14:51.727935   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:14:51.728018   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:14:51.732611   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:14:51.732669   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:14:51.736097   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:14:51.779358   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:14:51.779446   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:51.809785   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:14:51.840021   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:14:51.841344   29200 out.go:177]   - env NO_PROXY=192.168.39.227
	I0828 17:14:51.842489   29200 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:14:51.844988   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:51.845341   29200 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:43 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:14:51.845374   29200 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:14:51.845616   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:14:51.849640   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:51.861969   29200 mustload.go:65] Loading cluster: ha-240486
	I0828 17:14:51.862200   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:14:51.862455   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:51.862497   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:51.877690   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0828 17:14:51.878221   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:51.878718   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:51.878738   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:51.879035   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:51.879176   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:14:51.880797   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:51.881079   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:51.881111   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:51.896279   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0828 17:14:51.896673   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:51.897118   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:51.897139   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:51.897401   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:51.897562   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:51.897738   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.103
	I0828 17:14:51.897748   29200 certs.go:194] generating shared ca certs ...
	I0828 17:14:51.897761   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:51.897883   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:14:51.897924   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:14:51.897933   29200 certs.go:256] generating profile certs ...
	I0828 17:14:51.897995   29200 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:14:51.898021   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22
	I0828 17:14:51.898033   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.254]
	I0828 17:14:52.005029   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 ...
	I0828 17:14:52.005054   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22: {Name:mk01885375cad3d22fa2b18a0913731209d0f7f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:52.005236   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22 ...
	I0828 17:14:52.005253   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22: {Name:mk1cf0bdd411116af52d270493dcf45381853faa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:14:52.005348   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.fbdfaf22 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:14:52.005474   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.fbdfaf22 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
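	The apiserver profile cert generated here carries the service IP, localhost, both node IPs and the HA VIP (192.168.39.254) as SANs. A quick, hedged check that the copied cert really contains those names, assuming openssl is available on the control-plane host and the destination path from the log, is:

	  openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'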
	I0828 17:14:52.005592   29200 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:14:52.005606   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:14:52.005625   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:14:52.005637   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:14:52.005654   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:14:52.005666   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:14:52.005679   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:14:52.005689   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:14:52.005700   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:14:52.005742   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:14:52.005773   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:14:52.005783   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:14:52.005802   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:14:52.005822   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:14:52.005843   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:14:52.005878   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:14:52.005907   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.005920   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.005932   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.005962   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:52.008703   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:52.009075   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:52.009100   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:52.009248   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:52.009520   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:52.009666   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:52.009780   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:52.082432   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0828 17:14:52.087332   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0828 17:14:52.099798   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0828 17:14:52.104152   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0828 17:14:52.114144   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0828 17:14:52.117845   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0828 17:14:52.128105   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0828 17:14:52.132117   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0828 17:14:52.142213   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0828 17:14:52.145897   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0828 17:14:52.155902   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0828 17:14:52.159787   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0828 17:14:52.169474   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:14:52.192908   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:14:52.215638   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:14:52.238192   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:14:52.259513   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0828 17:14:52.280759   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:14:52.301862   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:14:52.323166   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:14:52.345050   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:14:52.366042   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:14:52.387082   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:14:52.408024   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0828 17:14:52.422663   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0828 17:14:52.438110   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0828 17:14:52.453761   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0828 17:14:52.468421   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0828 17:14:52.483087   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0828 17:14:52.497802   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0828 17:14:52.512440   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:14:52.517816   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:14:52.527439   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.531473   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.531520   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:14:52.536779   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:14:52.546308   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:14:52.556450   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.560503   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.560553   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:14:52.565872   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:14:52.575912   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:14:52.585733   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.589882   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.589937   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:14:52.595134   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
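	Each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs under the hash name that `openssl x509 -hash` prints (b5213941, 51391683 and 3ec20f2e in this run). The same mapping can be reproduced manually, assuming the files already exist on the node:

	  for f in minikubeCA.pem 17528.pem 175282.pem; do
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$f)
	    echo "$f -> /etc/ssl/certs/$h.0"
	  done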
	I0828 17:14:52.605198   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:14:52.608898   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:14:52.608951   29200 kubeadm.go:934] updating node {m02 192.168.39.103 8443 v1.31.0 crio true true} ...
	I0828 17:14:52.609036   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
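	The ExecStart line above is written into a systemd drop-in for kubelet (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a little later in the log). Once the node is up, a hedged way to confirm the flags actually took effect on ha-240486-m02 is:

	  sudo systemctl cat kubelet
	  # show the running kubelet's node-ip / hostname-override flags
	  ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'node-ip|hostname-override'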
	I0828 17:14:52.609068   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:14:52.609101   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:14:52.625208   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:14:52.625278   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
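	This manifest is later copied to /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes in this run), so kubelet runs kube-vip as a static pod that holds the control-plane VIP 192.168.39.254 and load-balances port 8443. After the node joins, the effect can be spot-checked with something like the following (interface name eth0 taken from the config above; the VIP only appears on the current kube-vip leader):

	  sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	  ip addr show eth0 | grep 192.168.39.254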
	I0828 17:14:52.625334   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:52.634543   29200 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0828 17:14:52.634606   29200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0828 17:14:52.643784   29200 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0828 17:14:52.643870   29200 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0828 17:14:52.643784   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0828 17:14:52.643920   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:14:52.644010   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:14:52.648261   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0828 17:14:52.648287   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0828 17:14:53.549787   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:14:53.549866   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:14:53.554526   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0828 17:14:53.554565   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0828 17:14:53.765178   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:14:53.800403   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:14:53.800500   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:14:53.805267   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0828 17:14:53.805300   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
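	kubelet and kubeadm are fetched through the checksum-pinned dl.k8s.io URLs shown above, cached under .minikube/cache/linux/amd64/v1.31.0, and then copied to /var/lib/minikube/binaries/v1.31.0 on the node. A sketch of the same download-and-verify step by hand, using the URLs from the log, would be:

	  curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
	  curl -LO https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	  echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check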
	I0828 17:14:54.117733   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0828 17:14:54.126890   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0828 17:14:54.144348   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:14:54.161479   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:14:54.178463   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:14:54.182442   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:14:54.193912   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:14:54.317990   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:14:54.335631   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:14:54.336129   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:14:54.336196   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:14:54.351508   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I0828 17:14:54.351940   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:14:54.352400   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:14:54.352425   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:14:54.352721   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:14:54.352908   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:14:54.353031   29200 start.go:317] joinCluster: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0828 17:14:54.353140   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0828 17:14:54.353158   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:14:54.356321   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:54.356770   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:14:54.356809   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:14:54.357067   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:14:54.357278   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:14:54.357451   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:14:54.357615   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:14:54.501288   29200 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:14:54.501351   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t7dffj.lbnbcon9dz7sdvz7 --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443"
	I0828 17:15:16.425395   29200 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token t7dffj.lbnbcon9dz7sdvz7 --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m02 --control-plane --apiserver-advertise-address=192.168.39.103 --apiserver-bind-port=8443": (21.924015345s)
	I0828 17:15:16.425446   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0828 17:15:16.983058   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486-m02 minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=false
	I0828 17:15:17.092321   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-240486-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0828 17:15:17.195990   29200 start.go:319] duration metric: took 22.842954145s to joinCluster
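	The join itself is the kubeadm invocation above: a token is minted on the primary with `kubeadm token create --print-join-command --ttl=0`, and m02 joins with `--control-plane` plus its own advertise address, after which the node is labeled and its control-plane taint removed. A hedged way to confirm the second control-plane member from the host, assuming the kubeconfig context is named after the profile, is:

	  kubectl --context ha-240486 get nodes -o wide
	  kubectl --context ha-240486 -n kube-system get pods -l component=etcd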
	I0828 17:15:17.196065   29200 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:15:17.196355   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:15:17.198043   29200 out.go:177] * Verifying Kubernetes components...
	I0828 17:15:17.199594   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:15:17.486580   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:15:17.512850   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:15:17.513151   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0828 17:15:17.513218   29200 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0828 17:15:17.513492   29200 node_ready.go:35] waiting up to 6m0s for node "ha-240486-m02" to be "Ready" ...
	I0828 17:15:17.513599   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:17.513612   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:17.513625   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:17.513630   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:17.521769   29200 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0828 17:15:18.013712   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:18.013746   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:18.013757   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:18.013763   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:18.019484   29200 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0828 17:15:18.514511   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:18.514532   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:18.514541   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:18.514545   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:18.518170   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:19.014635   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:19.014659   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:19.014670   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:19.014677   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:19.018729   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:19.514200   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:19.514223   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:19.514232   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:19.514236   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:19.517526   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:19.518126   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:20.013686   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:20.013711   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:20.013722   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:20.013728   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:20.016710   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:20.513675   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:20.513696   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:20.513708   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:20.513712   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:20.517212   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:21.014172   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:21.014195   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:21.014206   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:21.014210   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:21.018697   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:21.514475   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:21.514512   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:21.514522   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:21.514526   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:21.517844   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:21.518569   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:22.013738   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:22.013758   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:22.013767   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:22.013773   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:22.017406   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:22.514535   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:22.514557   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:22.514569   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:22.514577   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:22.518450   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:23.014476   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:23.014496   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:23.014504   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:23.014508   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:23.017523   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:23.514148   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:23.514167   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:23.514176   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:23.514180   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:23.517401   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:24.014492   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:24.014523   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:24.014535   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:24.014542   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:24.018750   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:24.019202   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:24.513722   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:24.513743   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:24.513751   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:24.513755   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:24.517023   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:25.014439   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:25.014465   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:25.014477   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:25.014482   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:25.018254   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:25.514350   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:25.514387   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:25.514399   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:25.514404   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:25.517422   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:26.014394   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:26.014415   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:26.014424   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:26.014429   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:26.017966   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:26.513970   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:26.513991   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:26.514000   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:26.514004   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:26.517078   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:26.517601   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:27.013898   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:27.013924   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:27.013934   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:27.013940   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:27.017053   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:27.514328   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:27.514356   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:27.514366   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:27.514370   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:27.517810   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:28.013713   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:28.013744   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:28.013751   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:28.013754   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:28.017200   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:28.513829   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:28.513855   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:28.513864   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:28.513870   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:28.516923   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:29.014101   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:29.014122   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:29.014135   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:29.014142   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:29.018063   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:29.018503   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:29.514406   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:29.514427   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:29.514435   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:29.514439   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:29.517872   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:30.014067   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:30.014117   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:30.014128   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:30.014134   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:30.017275   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:30.514359   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:30.514380   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:30.514388   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:30.514392   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:30.517774   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:31.014691   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:31.014717   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:31.014727   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:31.014732   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:31.018008   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:31.018707   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:31.514127   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:31.514153   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:31.514161   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:31.514165   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:31.517215   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:32.014130   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:32.014151   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:32.014160   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:32.014163   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:32.017006   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:32.513774   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:32.513798   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:32.513808   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:32.513812   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:32.516896   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.013811   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:33.013831   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:33.013841   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:33.013847   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:33.017335   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.513769   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:33.513793   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:33.513802   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:33.513808   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:33.517313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:33.518322   29200 node_ready.go:53] node "ha-240486-m02" has status "Ready":"False"
	I0828 17:15:34.014654   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:34.014681   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:34.014692   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:34.014697   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:34.017648   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:34.513955   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:34.513978   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:34.513986   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:34.513990   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:34.517349   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.013994   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.014015   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.014023   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.014029   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.017889   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.513780   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.513809   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.513820   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.513826   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.517176   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.517743   29200 node_ready.go:49] node "ha-240486-m02" has status "Ready":"True"
	I0828 17:15:35.517764   29200 node_ready.go:38] duration metric: took 18.004247806s for node "ha-240486-m02" to be "Ready" ...
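	The readiness wait above is a ~500 ms polling loop against GET /api/v1/nodes/ha-240486-m02 until the node's Ready condition flips to True (roughly 18 s after the join here). The kubectl equivalent, assuming the same context name as above, is roughly:

	  kubectl --context ha-240486 wait --for=condition=Ready node/ha-240486-m02 --timeout=6m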
	I0828 17:15:35.517776   29200 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:15:35.517861   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:35.517874   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.517884   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.517892   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.522041   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:35.528302   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.528407   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wtzml
	I0828 17:15:35.528419   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.528429   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.528438   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.531301   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.531817   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.531832   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.531842   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.531845   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.534272   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.534770   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.534787   29200 pod_ready.go:82] duration metric: took 6.459017ms for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.534798   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.534855   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x562s
	I0828 17:15:35.534865   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.534875   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.534881   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.537216   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.537796   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.537810   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.537819   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.537824   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.539925   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.540396   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.540411   29200 pod_ready.go:82] duration metric: took 5.606327ms for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.540423   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.540474   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486
	I0828 17:15:35.540484   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.540493   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.540499   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.542473   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:15:35.543096   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.543110   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.543120   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.543126   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.545555   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.546008   29200 pod_ready.go:93] pod "etcd-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.546027   29200 pod_ready.go:82] duration metric: took 5.597148ms for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.546040   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.546124   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m02
	I0828 17:15:35.546134   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.546146   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.546153   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.548354   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.548765   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:35.548777   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.548786   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.548793   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.550863   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.551191   29200 pod_ready.go:93] pod "etcd-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.551208   29200 pod_ready.go:82] duration metric: took 5.159072ms for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.551227   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:35.714632   29200 request.go:632] Waited for 163.332307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:15:35.714691   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:15:35.714696   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.714704   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.714709   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.717592   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:35.914761   29200 request.go:632] Waited for 196.371747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.914830   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:35.914836   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:35.914843   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:35.914848   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:35.918114   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:35.918688   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:35.918715   29200 pod_ready.go:82] duration metric: took 367.477955ms for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
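Note: the "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter spacing out back-to-back GETs. A minimal sketch of where that budget is configured when building a client (the QPS/Burst values and helper name are illustrative assumptions, not what minikube uses):

    package client

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClient builds a clientset with a larger client-side rate budget.
    // client-go's defaults are roughly QPS=5 and Burst=10, which is what
    // produces the ~200ms "client-side throttling" waits in the log above.
    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50   // illustrative value
    	cfg.Burst = 100 // illustrative value
    	return kubernetes.NewForConfig(cfg)
    }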
	I0828 17:15:35.918726   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.114627   29200 request.go:632] Waited for 195.832233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:15:36.114705   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:15:36.114714   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.114723   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.114731   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.118296   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.314232   29200 request.go:632] Waited for 195.315551ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:36.314331   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:36.314343   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.314354   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.314362   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.317346   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:36.317892   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:36.317911   29200 pod_ready.go:82] duration metric: took 399.178304ms for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.317920   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.513909   29200 request.go:632] Waited for 195.926987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:15:36.513997   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:15:36.514005   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.514014   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.514019   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.517299   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.714390   29200 request.go:632] Waited for 196.373231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:36.714449   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:36.714454   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.714461   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.714467   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.717562   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:36.718206   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:36.718226   29200 pod_ready.go:82] duration metric: took 400.299823ms for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.718237   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:36.914216   29200 request.go:632] Waited for 195.906561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:15:36.914311   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:15:36.914318   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:36.914327   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:36.914332   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:36.917884   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.113954   29200 request.go:632] Waited for 195.316279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.114023   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.114029   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.114037   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.114046   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.117354   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.117848   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.117872   29200 pod_ready.go:82] duration metric: took 399.623871ms for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.117883   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.313868   29200 request.go:632] Waited for 195.919913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:15:37.313937   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:15:37.313944   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.313952   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.313956   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.317638   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.514816   29200 request.go:632] Waited for 196.395024ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.514869   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:37.514874   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.514882   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.514886   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.517803   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:15:37.518391   29200 pod_ready.go:93] pod "kube-proxy-4w7tt" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.518411   29200 pod_ready.go:82] duration metric: took 400.517615ms for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.518423   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.714550   29200 request.go:632] Waited for 196.06408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:15:37.714626   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:15:37.714639   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.714649   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.714661   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.717959   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.913905   29200 request.go:632] Waited for 195.331101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:37.914060   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:37.914091   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:37.914104   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:37.914115   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:37.917242   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:37.917835   29200 pod_ready.go:93] pod "kube-proxy-jdnzs" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:37.917856   29200 pod_ready.go:82] duration metric: took 399.42415ms for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:37.917869   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.114814   29200 request.go:632] Waited for 196.863834ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:15:38.114870   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:15:38.114875   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.114884   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.114887   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.118355   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.314373   29200 request.go:632] Waited for 195.36618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:38.314442   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:15:38.314449   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.314458   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.314465   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.317738   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.318357   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:38.318378   29200 pod_ready.go:82] duration metric: took 400.500122ms for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.318393   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.513886   29200 request.go:632] Waited for 195.419271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:15:38.513976   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:15:38.513987   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.513999   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.514007   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.517316   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.714274   29200 request.go:632] Waited for 196.387122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:38.714331   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:15:38.714336   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.714370   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.714380   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.717742   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:38.718408   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:15:38.718428   29200 pod_ready.go:82] duration metric: took 400.024956ms for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:15:38.718439   29200 pod_ready.go:39] duration metric: took 3.200648757s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
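Note: each pod_ready wait above fetches the pod and checks its Ready condition before moving on. A minimal sketch of the same check with client-go (the helper name and clientset wiring are assumptions for illustration, not minikube's actual pod_ready.go code):

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named kube-system pod has the PodReady
    // condition set to True, mirroring the per-pod polls in the log above.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }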
	I0828 17:15:38.718454   29200 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:15:38.718502   29200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:15:38.732926   29200 api_server.go:72] duration metric: took 21.536827363s to wait for apiserver process to appear ...
	I0828 17:15:38.732949   29200 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:15:38.732966   29200 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0828 17:15:38.737997   29200 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
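Note: the healthz wait above is a plain HTTPS GET that treats a 200 response with body "ok" as healthy. A stdlib-only sketch of the same probe (certificate handling is simplified here with InsecureSkipVerify purely to keep the example short; the endpoint string is taken from the log):

    package apihealth

    import (
    	"crypto/tls"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy performs the GET /healthz probe seen above and
    // reports healthy only for a 200 response whose body is exactly "ok".
    func apiserverHealthy(endpoint string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Skipping verification only to keep the sketch self-contained.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return false, err
    	}
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }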
	I0828 17:15:38.738055   29200 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0828 17:15:38.738060   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.738068   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.738071   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.739313   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:15:38.739410   29200 api_server.go:141] control plane version: v1.31.0
	I0828 17:15:38.739426   29200 api_server.go:131] duration metric: took 6.471345ms to wait for apiserver health ...
	I0828 17:15:38.739434   29200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:15:38.914808   29200 request.go:632] Waited for 175.291341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:38.914885   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:38.914893   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:38.914904   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:38.914916   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:38.919370   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:38.923886   29200 system_pods.go:59] 17 kube-system pods found
	I0828 17:15:38.923914   29200 system_pods.go:61] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:15:38.923922   29200 system_pods.go:61] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:15:38.923926   29200 system_pods.go:61] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:15:38.923929   29200 system_pods.go:61] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:15:38.923932   29200 system_pods.go:61] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:15:38.923936   29200 system_pods.go:61] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:15:38.923940   29200 system_pods.go:61] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:15:38.923943   29200 system_pods.go:61] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:15:38.923951   29200 system_pods.go:61] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:15:38.923955   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:15:38.923958   29200 system_pods.go:61] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:15:38.923962   29200 system_pods.go:61] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:15:38.923966   29200 system_pods.go:61] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:15:38.923970   29200 system_pods.go:61] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:15:38.923975   29200 system_pods.go:61] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:15:38.923982   29200 system_pods.go:61] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:15:38.923987   29200 system_pods.go:61] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:15:38.923997   29200 system_pods.go:74] duration metric: took 184.5575ms to wait for pod list to return data ...
	I0828 17:15:38.924007   29200 default_sa.go:34] waiting for default service account to be created ...
	I0828 17:15:39.114465   29200 request.go:632] Waited for 190.380314ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:15:39.114518   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:15:39.114523   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.114530   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.114533   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.118624   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:39.118837   29200 default_sa.go:45] found service account: "default"
	I0828 17:15:39.118852   29200 default_sa.go:55] duration metric: took 194.838823ms for default service account to be created ...
	I0828 17:15:39.118860   29200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 17:15:39.314371   29200 request.go:632] Waited for 195.426211ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:39.314426   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:15:39.314431   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.314439   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.314443   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.319280   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:15:39.323575   29200 system_pods.go:86] 17 kube-system pods found
	I0828 17:15:39.323613   29200 system_pods.go:89] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:15:39.323621   29200 system_pods.go:89] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:15:39.323629   29200 system_pods.go:89] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:15:39.323636   29200 system_pods.go:89] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:15:39.323642   29200 system_pods.go:89] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:15:39.323649   29200 system_pods.go:89] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:15:39.323656   29200 system_pods.go:89] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:15:39.323664   29200 system_pods.go:89] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:15:39.323676   29200 system_pods.go:89] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:15:39.323681   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:15:39.323689   29200 system_pods.go:89] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:15:39.323694   29200 system_pods.go:89] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:15:39.323700   29200 system_pods.go:89] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:15:39.323704   29200 system_pods.go:89] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:15:39.323712   29200 system_pods.go:89] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:15:39.323715   29200 system_pods.go:89] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:15:39.323722   29200 system_pods.go:89] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:15:39.323732   29200 system_pods.go:126] duration metric: took 204.865856ms to wait for k8s-apps to be running ...
	I0828 17:15:39.323744   29200 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 17:15:39.323790   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:15:39.338729   29200 system_svc.go:56] duration metric: took 14.979047ms WaitForService to wait for kubelet
	I0828 17:15:39.338759   29200 kubeadm.go:582] duration metric: took 22.14266206s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:15:39.338784   29200 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:15:39.514507   29200 request.go:632] Waited for 175.626696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0828 17:15:39.514569   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0828 17:15:39.514578   29200 round_trippers.go:469] Request Headers:
	I0828 17:15:39.514590   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:15:39.514600   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:15:39.518204   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:15:39.519150   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:15:39.519181   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:15:39.519196   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:15:39.519202   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:15:39.519211   29200 node_conditions.go:105] duration metric: took 180.421268ms to run NodePressure ...
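Note: the NodePressure step above reads each node's capacity fields (ephemeral storage and CPU). A sketch of pulling the same two numbers with client-go (the function name is made up; clientset construction is omitted):

    package nodeinfo

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printCapacities lists every node's ephemeral-storage and CPU capacity,
    // the same values logged by node_conditions.go above.
    func printCapacities(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }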
	I0828 17:15:39.519228   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:15:39.519259   29200 start.go:255] writing updated cluster config ...
	I0828 17:15:39.521387   29200 out.go:201] 
	I0828 17:15:39.522752   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:15:39.522874   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:15:39.524455   29200 out.go:177] * Starting "ha-240486-m03" control-plane node in "ha-240486" cluster
	I0828 17:15:39.525471   29200 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:15:39.525487   29200 cache.go:56] Caching tarball of preloaded images
	I0828 17:15:39.525565   29200 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:15:39.525575   29200 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:15:39.525652   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:15:39.525805   29200 start.go:360] acquireMachinesLock for ha-240486-m03: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:15:39.525843   29200 start.go:364] duration metric: took 20.835µs to acquireMachinesLock for "ha-240486-m03"
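Note: acquireMachinesLock above serializes provisioning of a given machine name; the Delay/Timeout fields in the lock spec are visible in the log line. The real lock is cross-process with a timeout, so the following is only an in-process sketch of the per-name serialization idea (all names here are illustrative):

    package machinelock

    import "sync"

    var (
    	mu    sync.Mutex
    	locks = map[string]*sync.Mutex{}
    )

    // lockFor returns the mutex guarding one machine name, creating it on
    // first use, so concurrent callers creating the same machine queue up.
    func lockFor(name string) *sync.Mutex {
    	mu.Lock()
    	defer mu.Unlock()
    	if l, ok := locks[name]; ok {
    		return l
    	}
    	l := &sync.Mutex{}
    	locks[name] = l
    	return l
    }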
	I0828 17:15:39.525860   29200 start.go:93] Provisioning new machine with config: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:15:39.525943   29200 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0828 17:15:39.527450   29200 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 17:15:39.527538   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:15:39.527571   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:15:39.542314   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0828 17:15:39.542721   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:15:39.543151   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:15:39.543171   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:15:39.543458   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:15:39.543607   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:15:39.543779   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:15:39.543915   29200 start.go:159] libmachine.API.Create for "ha-240486" (driver="kvm2")
	I0828 17:15:39.543937   29200 client.go:168] LocalClient.Create starting
	I0828 17:15:39.543965   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 17:15:39.543996   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:15:39.544010   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:15:39.544056   29200 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 17:15:39.544074   29200 main.go:141] libmachine: Decoding PEM data...
	I0828 17:15:39.544092   29200 main.go:141] libmachine: Parsing certificate...
	I0828 17:15:39.544107   29200 main.go:141] libmachine: Running pre-create checks...
	I0828 17:15:39.544115   29200 main.go:141] libmachine: (ha-240486-m03) Calling .PreCreateCheck
	I0828 17:15:39.544273   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:15:39.544646   29200 main.go:141] libmachine: Creating machine...
	I0828 17:15:39.544660   29200 main.go:141] libmachine: (ha-240486-m03) Calling .Create
	I0828 17:15:39.544798   29200 main.go:141] libmachine: (ha-240486-m03) Creating KVM machine...
	I0828 17:15:39.545885   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found existing default KVM network
	I0828 17:15:39.546000   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found existing private KVM network mk-ha-240486
	I0828 17:15:39.546135   29200 main.go:141] libmachine: (ha-240486-m03) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 ...
	I0828 17:15:39.546179   29200 main.go:141] libmachine: (ha-240486-m03) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 17:15:39.546331   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.546127   29930 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:15:39.546383   29200 main.go:141] libmachine: (ha-240486-m03) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 17:15:39.769872   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.769729   29930 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa...
	I0828 17:15:39.921729   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.921586   29930 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/ha-240486-m03.rawdisk...
	I0828 17:15:39.921767   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Writing magic tar header
	I0828 17:15:39.921781   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Writing SSH key tar header
	I0828 17:15:39.921792   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:39.921737   29930 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 ...
	I0828 17:15:39.921931   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03 (perms=drwx------)
	I0828 17:15:39.921960   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 17:15:39.921974   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03
	I0828 17:15:39.921992   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 17:15:39.922001   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:15:39.922011   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 17:15:39.922019   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 17:15:39.922025   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home/jenkins
	I0828 17:15:39.922031   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Checking permissions on dir: /home
	I0828 17:15:39.922061   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Skipping /home - not owner
	I0828 17:15:39.922110   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 17:15:39.922131   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 17:15:39.922146   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 17:15:39.922163   29200 main.go:141] libmachine: (ha-240486-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 17:15:39.922174   29200 main.go:141] libmachine: (ha-240486-m03) Creating domain...
	I0828 17:15:39.923082   29200 main.go:141] libmachine: (ha-240486-m03) define libvirt domain using xml: 
	I0828 17:15:39.923104   29200 main.go:141] libmachine: (ha-240486-m03) <domain type='kvm'>
	I0828 17:15:39.923115   29200 main.go:141] libmachine: (ha-240486-m03)   <name>ha-240486-m03</name>
	I0828 17:15:39.923127   29200 main.go:141] libmachine: (ha-240486-m03)   <memory unit='MiB'>2200</memory>
	I0828 17:15:39.923139   29200 main.go:141] libmachine: (ha-240486-m03)   <vcpu>2</vcpu>
	I0828 17:15:39.923147   29200 main.go:141] libmachine: (ha-240486-m03)   <features>
	I0828 17:15:39.923178   29200 main.go:141] libmachine: (ha-240486-m03)     <acpi/>
	I0828 17:15:39.923203   29200 main.go:141] libmachine: (ha-240486-m03)     <apic/>
	I0828 17:15:39.923215   29200 main.go:141] libmachine: (ha-240486-m03)     <pae/>
	I0828 17:15:39.923226   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923235   29200 main.go:141] libmachine: (ha-240486-m03)   </features>
	I0828 17:15:39.923245   29200 main.go:141] libmachine: (ha-240486-m03)   <cpu mode='host-passthrough'>
	I0828 17:15:39.923254   29200 main.go:141] libmachine: (ha-240486-m03)   
	I0828 17:15:39.923263   29200 main.go:141] libmachine: (ha-240486-m03)   </cpu>
	I0828 17:15:39.923274   29200 main.go:141] libmachine: (ha-240486-m03)   <os>
	I0828 17:15:39.923284   29200 main.go:141] libmachine: (ha-240486-m03)     <type>hvm</type>
	I0828 17:15:39.923292   29200 main.go:141] libmachine: (ha-240486-m03)     <boot dev='cdrom'/>
	I0828 17:15:39.923302   29200 main.go:141] libmachine: (ha-240486-m03)     <boot dev='hd'/>
	I0828 17:15:39.923311   29200 main.go:141] libmachine: (ha-240486-m03)     <bootmenu enable='no'/>
	I0828 17:15:39.923319   29200 main.go:141] libmachine: (ha-240486-m03)   </os>
	I0828 17:15:39.923330   29200 main.go:141] libmachine: (ha-240486-m03)   <devices>
	I0828 17:15:39.923341   29200 main.go:141] libmachine: (ha-240486-m03)     <disk type='file' device='cdrom'>
	I0828 17:15:39.923357   29200 main.go:141] libmachine: (ha-240486-m03)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/boot2docker.iso'/>
	I0828 17:15:39.923371   29200 main.go:141] libmachine: (ha-240486-m03)       <target dev='hdc' bus='scsi'/>
	I0828 17:15:39.923403   29200 main.go:141] libmachine: (ha-240486-m03)       <readonly/>
	I0828 17:15:39.923425   29200 main.go:141] libmachine: (ha-240486-m03)     </disk>
	I0828 17:15:39.923441   29200 main.go:141] libmachine: (ha-240486-m03)     <disk type='file' device='disk'>
	I0828 17:15:39.923455   29200 main.go:141] libmachine: (ha-240486-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 17:15:39.923473   29200 main.go:141] libmachine: (ha-240486-m03)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/ha-240486-m03.rawdisk'/>
	I0828 17:15:39.923483   29200 main.go:141] libmachine: (ha-240486-m03)       <target dev='hda' bus='virtio'/>
	I0828 17:15:39.923494   29200 main.go:141] libmachine: (ha-240486-m03)     </disk>
	I0828 17:15:39.923506   29200 main.go:141] libmachine: (ha-240486-m03)     <interface type='network'>
	I0828 17:15:39.923534   29200 main.go:141] libmachine: (ha-240486-m03)       <source network='mk-ha-240486'/>
	I0828 17:15:39.923554   29200 main.go:141] libmachine: (ha-240486-m03)       <model type='virtio'/>
	I0828 17:15:39.923565   29200 main.go:141] libmachine: (ha-240486-m03)     </interface>
	I0828 17:15:39.923576   29200 main.go:141] libmachine: (ha-240486-m03)     <interface type='network'>
	I0828 17:15:39.923590   29200 main.go:141] libmachine: (ha-240486-m03)       <source network='default'/>
	I0828 17:15:39.923601   29200 main.go:141] libmachine: (ha-240486-m03)       <model type='virtio'/>
	I0828 17:15:39.923611   29200 main.go:141] libmachine: (ha-240486-m03)     </interface>
	I0828 17:15:39.923621   29200 main.go:141] libmachine: (ha-240486-m03)     <serial type='pty'>
	I0828 17:15:39.923631   29200 main.go:141] libmachine: (ha-240486-m03)       <target port='0'/>
	I0828 17:15:39.923645   29200 main.go:141] libmachine: (ha-240486-m03)     </serial>
	I0828 17:15:39.923679   29200 main.go:141] libmachine: (ha-240486-m03)     <console type='pty'>
	I0828 17:15:39.923698   29200 main.go:141] libmachine: (ha-240486-m03)       <target type='serial' port='0'/>
	I0828 17:15:39.923711   29200 main.go:141] libmachine: (ha-240486-m03)     </console>
	I0828 17:15:39.923725   29200 main.go:141] libmachine: (ha-240486-m03)     <rng model='virtio'>
	I0828 17:15:39.923734   29200 main.go:141] libmachine: (ha-240486-m03)       <backend model='random'>/dev/random</backend>
	I0828 17:15:39.923740   29200 main.go:141] libmachine: (ha-240486-m03)     </rng>
	I0828 17:15:39.923746   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923752   29200 main.go:141] libmachine: (ha-240486-m03)     
	I0828 17:15:39.923770   29200 main.go:141] libmachine: (ha-240486-m03)   </devices>
	I0828 17:15:39.923789   29200 main.go:141] libmachine: (ha-240486-m03) </domain>
	I0828 17:15:39.923799   29200 main.go:141] libmachine: (ha-240486-m03) 
	I0828 17:15:39.930273   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:e8:20:89 in network default
	I0828 17:15:39.930747   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:39.930764   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring networks are active...
	I0828 17:15:39.931428   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring network default is active
	I0828 17:15:39.931658   29200 main.go:141] libmachine: (ha-240486-m03) Ensuring network mk-ha-240486 is active
	I0828 17:15:39.932000   29200 main.go:141] libmachine: (ha-240486-m03) Getting domain xml...
	I0828 17:15:39.932671   29200 main.go:141] libmachine: (ha-240486-m03) Creating domain...
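Note: the "define libvirt domain using xml" and "Creating domain..." steps above are done by the kvm2 driver through libvirt. An equivalent, sketched by shelling out to virsh instead of using the driver's code path (XML path and domain name are placeholders):

    package kvmutil

    import (
    	"fmt"
    	"os/exec"
    )

    // defineAndStart registers a domain from an XML file and boots it,
    // roughly the "define libvirt domain" and "Creating domain..." steps above.
    func defineAndStart(xmlPath, name string) error {
    	// virsh define registers the domain with libvirt without starting it.
    	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
    		return fmt.Errorf("define: %v: %s", err, out)
    	}
    	// virsh start then boots the freshly defined domain.
    	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
    		return fmt.Errorf("start: %v: %s", err, out)
    	}
    	return nil
    }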
	I0828 17:15:41.172014   29200 main.go:141] libmachine: (ha-240486-m03) Waiting to get IP...
	I0828 17:15:41.172734   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.173147   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.173196   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.173152   29930 retry.go:31] will retry after 227.598083ms: waiting for machine to come up
	I0828 17:15:41.402806   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.403278   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.403306   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.403240   29930 retry.go:31] will retry after 249.890746ms: waiting for machine to come up
	I0828 17:15:41.656028   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:41.656449   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:41.656467   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:41.656412   29930 retry.go:31] will retry after 456.580621ms: waiting for machine to come up
	I0828 17:15:42.114765   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:42.115241   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:42.115274   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:42.115192   29930 retry.go:31] will retry after 420.923136ms: waiting for machine to come up
	I0828 17:15:42.537966   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:42.538404   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:42.538460   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:42.538356   29930 retry.go:31] will retry after 728.870515ms: waiting for machine to come up
	I0828 17:15:43.268293   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:43.268676   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:43.268704   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:43.268630   29930 retry.go:31] will retry after 802.680619ms: waiting for machine to come up
	I0828 17:15:44.072482   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:44.072962   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:44.072991   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:44.072907   29930 retry.go:31] will retry after 1.076312326s: waiting for machine to come up
	I0828 17:15:45.150919   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:45.151447   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:45.151478   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:45.151406   29930 retry.go:31] will retry after 1.105111399s: waiting for machine to come up
	I0828 17:15:46.258745   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:46.259186   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:46.259210   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:46.259153   29930 retry.go:31] will retry after 1.521636059s: waiting for machine to come up
	I0828 17:15:47.782743   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:47.783150   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:47.783175   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:47.783106   29930 retry.go:31] will retry after 2.061034215s: waiting for machine to come up
	I0828 17:15:49.846879   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:49.847359   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:49.847398   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:49.847316   29930 retry.go:31] will retry after 2.417689828s: waiting for machine to come up
	I0828 17:15:52.267103   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:52.267504   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:52.267529   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:52.267452   29930 retry.go:31] will retry after 2.531691934s: waiting for machine to come up
	I0828 17:15:54.800110   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:54.800491   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:54.800518   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:54.800451   29930 retry.go:31] will retry after 3.301665009s: waiting for machine to come up
	I0828 17:15:58.103319   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:15:58.103797   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find current IP address of domain ha-240486-m03 in network mk-ha-240486
	I0828 17:15:58.103827   29200 main.go:141] libmachine: (ha-240486-m03) DBG | I0828 17:15:58.103739   29930 retry.go:31] will retry after 4.773578468s: waiting for machine to come up
	I0828 17:16:02.881367   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.881716   29200 main.go:141] libmachine: (ha-240486-m03) Found IP for machine: 192.168.39.28
	I0828 17:16:02.881742   29200 main.go:141] libmachine: (ha-240486-m03) Reserving static IP address...
	I0828 17:16:02.881759   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has current primary IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.882039   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find host DHCP lease matching {name: "ha-240486-m03", mac: "52:54:00:2e:b2:44", ip: "192.168.39.28"} in network mk-ha-240486
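Note: the "will retry after ..." lines above come from a retry helper that backs off with jitter while the new VM acquires a DHCP lease. A stdlib sketch of that pattern (the lookup callback and durations are placeholders, not minikube's retry.go implementation):

    package kvmutil

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it returns an address, sleeping with
    // jittered, roughly doubling delays like the retry lines above.
    func waitForIP(lookup func() (string, bool), maxWait time.Duration) (string, error) {
    	deadline := time.Now().Add(maxWait)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
    		time.Sleep(delay + jitter)
    		if delay < 5*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }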
	I0828 17:16:02.954847   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Getting to WaitForSSH function...
	I0828 17:16:02.954879   29200 main.go:141] libmachine: (ha-240486-m03) Reserved static IP address: 192.168.39.28
	I0828 17:16:02.954892   29200 main.go:141] libmachine: (ha-240486-m03) Waiting for SSH to be available...
	I0828 17:16:02.957270   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:02.957635   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486
	I0828 17:16:02.957663   29200 main.go:141] libmachine: (ha-240486-m03) DBG | unable to find defined IP address of network mk-ha-240486 interface with MAC address 52:54:00:2e:b2:44
	I0828 17:16:02.957816   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH client type: external
	I0828 17:16:02.957844   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa (-rw-------)
	I0828 17:16:02.957887   29200 main.go:141] libmachine: (ha-240486-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:16:02.957909   29200 main.go:141] libmachine: (ha-240486-m03) DBG | About to run SSH command:
	I0828 17:16:02.957927   29200 main.go:141] libmachine: (ha-240486-m03) DBG | exit 0
	I0828 17:16:02.962359   29200 main.go:141] libmachine: (ha-240486-m03) DBG | SSH cmd err, output: exit status 255: 
	I0828 17:16:02.962386   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 17:16:02.962395   29200 main.go:141] libmachine: (ha-240486-m03) DBG | command : exit 0
	I0828 17:16:02.962404   29200 main.go:141] libmachine: (ha-240486-m03) DBG | err     : exit status 255
	I0828 17:16:02.962412   29200 main.go:141] libmachine: (ha-240486-m03) DBG | output  : 
	I0828 17:16:05.963328   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Getting to WaitForSSH function...
	I0828 17:16:05.965582   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:05.965990   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:05.966018   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:05.966144   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH client type: external
	I0828 17:16:05.966167   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa (-rw-------)
	I0828 17:16:05.966227   29200 main.go:141] libmachine: (ha-240486-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:16:05.966259   29200 main.go:141] libmachine: (ha-240486-m03) DBG | About to run SSH command:
	I0828 17:16:05.966276   29200 main.go:141] libmachine: (ha-240486-m03) DBG | exit 0
	I0828 17:16:06.090307   29200 main.go:141] libmachine: (ha-240486-m03) DBG | SSH cmd err, output: <nil>: 
	I0828 17:16:06.090633   29200 main.go:141] libmachine: (ha-240486-m03) KVM machine creation complete!
	I0828 17:16:06.090884   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:16:06.091476   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.091736   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.091895   29200 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 17:16:06.091913   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:16:06.093159   29200 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 17:16:06.093173   29200 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 17:16:06.093179   29200 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 17:16:06.093188   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.095269   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.095642   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.095670   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.095771   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.095940   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.096105   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.096258   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.096461   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.096735   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.096752   29200 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 17:16:06.197511   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:16:06.197538   29200 main.go:141] libmachine: Detecting the provisioner...
	I0828 17:16:06.197552   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.200467   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.200905   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.200934   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.201099   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.201280   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.201411   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.201583   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.201742   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.201946   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.201960   29200 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 17:16:06.310570   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 17:16:06.310643   29200 main.go:141] libmachine: found compatible host: buildroot
	I0828 17:16:06.310656   29200 main.go:141] libmachine: Provisioning with buildroot...
	I0828 17:16:06.310670   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.310918   29200 buildroot.go:166] provisioning hostname "ha-240486-m03"
	I0828 17:16:06.310941   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.311113   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.313515   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.313894   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.313919   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.314028   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.314231   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.314418   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.314621   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.314804   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.314959   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.314972   29200 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486-m03 && echo "ha-240486-m03" | sudo tee /etc/hostname
	I0828 17:16:06.431268   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486-m03
	
	I0828 17:16:06.431296   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.434406   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.434790   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.434824   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.435027   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.435226   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.435413   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.435564   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.435751   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.435920   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.435935   29200 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:16:06.546579   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:16:06.546611   29200 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:16:06.546629   29200 buildroot.go:174] setting up certificates
	I0828 17:16:06.546639   29200 provision.go:84] configureAuth start
	I0828 17:16:06.546647   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetMachineName
	I0828 17:16:06.546913   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:06.549427   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.549904   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.549935   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.550116   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.552421   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.552770   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.552799   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.552909   29200 provision.go:143] copyHostCerts
	I0828 17:16:06.552942   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:16:06.552978   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:16:06.552987   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:16:06.553070   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:16:06.553168   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:16:06.553197   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:16:06.553207   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:16:06.553246   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:16:06.553295   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:16:06.553312   29200 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:16:06.553318   29200 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:16:06.553339   29200 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:16:06.553397   29200 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486-m03 san=[127.0.0.1 192.168.39.28 ha-240486-m03 localhost minikube]
	I0828 17:16:06.591711   29200 provision.go:177] copyRemoteCerts
	I0828 17:16:06.591761   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:16:06.591782   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.594451   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.594917   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.594957   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.595083   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.595305   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.595445   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.595594   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:06.676118   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:16:06.676193   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:16:06.698865   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:16:06.698950   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 17:16:06.721497   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:16:06.721559   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:16:06.743986   29200 provision.go:87] duration metric: took 197.335179ms to configureAuth
	I0828 17:16:06.744022   29200 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:16:06.744263   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:06.744340   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.747225   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.747573   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.747603   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.747794   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.747997   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.748195   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.748372   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.748562   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:06.748745   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:06.748767   29200 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:16:06.964653   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:16:06.964678   29200 main.go:141] libmachine: Checking connection to Docker...
	I0828 17:16:06.964687   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetURL
	I0828 17:16:06.965854   29200 main.go:141] libmachine: (ha-240486-m03) DBG | Using libvirt version 6000000
	I0828 17:16:06.967687   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.968051   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.968072   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.968223   29200 main.go:141] libmachine: Docker is up and running!
	I0828 17:16:06.968250   29200 main.go:141] libmachine: Reticulating splines...
	I0828 17:16:06.968256   29200 client.go:171] duration metric: took 27.424311592s to LocalClient.Create
	I0828 17:16:06.968278   29200 start.go:167] duration metric: took 27.424361459s to libmachine.API.Create "ha-240486"
	I0828 17:16:06.968291   29200 start.go:293] postStartSetup for "ha-240486-m03" (driver="kvm2")
	I0828 17:16:06.968305   29200 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:16:06.968331   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:06.968547   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:16:06.968576   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:06.970418   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.970723   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:06.970749   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:06.970870   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:06.971032   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:06.971150   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:06.971259   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.052135   29200 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:16:07.056138   29200 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:16:07.056163   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:16:07.056240   29200 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:16:07.056335   29200 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:16:07.056347   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:16:07.056461   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:16:07.066071   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:16:07.089057   29200 start.go:296] duration metric: took 120.749316ms for postStartSetup
	I0828 17:16:07.089098   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetConfigRaw
	I0828 17:16:07.089669   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:07.092079   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.092440   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.092469   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.092732   29200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:16:07.092949   29200 start.go:128] duration metric: took 27.566995404s to createHost
	I0828 17:16:07.092975   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:07.095233   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.095535   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.095580   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.095708   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.095903   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.096056   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.096205   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.096422   29200 main.go:141] libmachine: Using SSH client type: native
	I0828 17:16:07.096632   29200 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I0828 17:16:07.096648   29200 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:16:07.198563   29200 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865367.179990749
	
	I0828 17:16:07.198592   29200 fix.go:216] guest clock: 1724865367.179990749
	I0828 17:16:07.198603   29200 fix.go:229] Guest: 2024-08-28 17:16:07.179990749 +0000 UTC Remote: 2024-08-28 17:16:07.092961015 +0000 UTC m=+138.865457633 (delta=87.029734ms)
	I0828 17:16:07.198622   29200 fix.go:200] guest clock delta is within tolerance: 87.029734ms
	I0828 17:16:07.198632   29200 start.go:83] releasing machines lock for "ha-240486-m03", held for 27.672780347s
	I0828 17:16:07.198652   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.198921   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:07.201767   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.202197   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.202231   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.204670   29200 out.go:177] * Found network options:
	I0828 17:16:07.205999   29200 out.go:177]   - NO_PROXY=192.168.39.227,192.168.39.103
	W0828 17:16:07.207467   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	W0828 17:16:07.207496   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:16:07.207514   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208065   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208264   29200 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:16:07.208381   29200 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:16:07.208420   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	W0828 17:16:07.208456   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	W0828 17:16:07.208482   29200 proxy.go:119] fail to check proxy env: Error ip not in block
	I0828 17:16:07.208545   29200 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:16:07.208566   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:16:07.211258   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211504   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211681   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.211710   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.211874   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.212071   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.212265   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.212398   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:07.212420   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:07.212461   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.212575   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:16:07.212714   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:16:07.212888   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:16:07.213024   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:16:07.478026   29200 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:16:07.483696   29200 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:16:07.483750   29200 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:16:07.505666   29200 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:16:07.505693   29200 start.go:495] detecting cgroup driver to use...
	I0828 17:16:07.505747   29200 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:16:07.522613   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:16:07.536542   29200 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:16:07.536609   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:16:07.550287   29200 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:16:07.564020   29200 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:16:07.680205   29200 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:16:07.828463   29200 docker.go:233] disabling docker service ...
	I0828 17:16:07.828523   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:16:07.841867   29200 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:16:07.854340   29200 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:16:07.987258   29200 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:16:08.095512   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:16:08.108828   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:16:08.125742   29200 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:16:08.125807   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.135295   29200 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:16:08.135363   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.144580   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.153785   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.163132   29200 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:16:08.176566   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.186664   29200 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.202268   29200 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:16:08.212012   29200 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:16:08.220505   29200 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:16:08.220560   29200 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:16:08.233919   29200 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:16:08.243089   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:08.348646   29200 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:16:08.436411   29200 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:16:08.436489   29200 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:16:08.440859   29200 start.go:563] Will wait 60s for crictl version
	I0828 17:16:08.440918   29200 ssh_runner.go:195] Run: which crictl
	I0828 17:16:08.444665   29200 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:16:08.485561   29200 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:16:08.485636   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:16:08.512223   29200 ssh_runner.go:195] Run: crio --version
	I0828 17:16:08.541846   29200 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:16:08.543241   29200 out.go:177]   - env NO_PROXY=192.168.39.227
	I0828 17:16:08.544487   29200 out.go:177]   - env NO_PROXY=192.168.39.227,192.168.39.103
	I0828 17:16:08.545568   29200 main.go:141] libmachine: (ha-240486-m03) Calling .GetIP
	I0828 17:16:08.548178   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:08.548583   29200 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:16:08.548611   29200 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:16:08.548795   29200 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:16:08.552944   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:16:08.565072   29200 mustload.go:65] Loading cluster: ha-240486
	I0828 17:16:08.565312   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:08.565625   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:08.565664   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:08.581402   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0828 17:16:08.581843   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:08.582342   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:08.582370   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:08.582727   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:08.582912   29200 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:16:08.584362   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:16:08.584649   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:08.584683   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:08.601185   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37593
	I0828 17:16:08.601556   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:08.601984   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:08.602004   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:08.602324   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:08.602512   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:16:08.602712   29200 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.28
	I0828 17:16:08.602725   29200 certs.go:194] generating shared ca certs ...
	I0828 17:16:08.602741   29200 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.602883   29200 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:16:08.602962   29200 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:16:08.602974   29200 certs.go:256] generating profile certs ...
	I0828 17:16:08.603069   29200 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:16:08.603100   29200 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9
	I0828 17:16:08.603119   29200 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.28 192.168.39.254]
	I0828 17:16:08.726654   29200 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 ...
	I0828 17:16:08.726683   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9: {Name:mk7b521344b243403383813c675a0854fb8cab41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.726872   29200 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9 ...
	I0828 17:16:08.726889   29200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9: {Name:mk8d14edb46ee42a5ec5b7143c6e1b74d0a4bd2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:16:08.726980   29200 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.94f0a6b9 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:16:08.727154   29200 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.94f0a6b9 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:16:08.727337   29200 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:16:08.727356   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:16:08.727374   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:16:08.727400   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:16:08.727418   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:16:08.727435   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:16:08.727452   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:16:08.727469   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:16:08.727486   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:16:08.727552   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:16:08.727591   29200 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:16:08.727604   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:16:08.727645   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:16:08.727674   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:16:08.727705   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:16:08.727761   29200 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:16:08.727795   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:08.727814   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:16:08.727833   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:16:08.727871   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:16:08.730779   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:08.731196   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:16:08.731226   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:08.731361   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:16:08.731559   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:16:08.731728   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:16:08.731884   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:16:08.806441   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0828 17:16:08.811394   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0828 17:16:08.822312   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0828 17:16:08.826934   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0828 17:16:08.837874   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0828 17:16:08.841762   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0828 17:16:08.853116   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0828 17:16:08.857249   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0828 17:16:08.866997   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0828 17:16:08.870701   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0828 17:16:08.879828   29200 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0828 17:16:08.883604   29200 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0828 17:16:08.893126   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:16:08.917041   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:16:08.941523   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:16:08.963919   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:16:08.986263   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0828 17:16:09.009214   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:16:09.034619   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:16:09.059992   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:16:09.084963   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:16:09.109712   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:16:09.131789   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:16:09.153702   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0828 17:16:09.168749   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0828 17:16:09.184636   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0828 17:16:09.200091   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0828 17:16:09.215008   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0828 17:16:09.230529   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0828 17:16:09.246295   29200 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0828 17:16:09.261861   29200 ssh_runner.go:195] Run: openssl version
	I0828 17:16:09.267139   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:16:09.276755   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.280738   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.280786   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:16:09.286057   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:16:09.295691   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:16:09.305444   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.309439   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.309495   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:16:09.314706   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:16:09.324354   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:16:09.334045   29200 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.338635   29200 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.338694   29200 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:16:09.343970   29200 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 17:16:09.353891   29200 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:16:09.357712   29200 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 17:16:09.357772   29200 kubeadm.go:934] updating node {m03 192.168.39.28 8443 v1.31.0 crio true true} ...
	I0828 17:16:09.357872   29200 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:16:09.357909   29200 kube-vip.go:115] generating kube-vip config ...
	I0828 17:16:09.357960   29200 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:16:09.374847   29200 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:16:09.374907   29200 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
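	[editor's note] The manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests on each control-plane node so the cluster VIP (192.168.39.254:8443 in this run) is advertised via ARP and load-balanced across the control planes. A quick way to check whether the VIP is currently held by a healthy node is a plain TCP dial against it; the sketch below is an assumption-level probe, not part of the test itself.

	// Reachability probe for the control-plane VIP advertised by kube-vip
	// (192.168.39.254:8443 in this run). Sketch only: a successful TCP dial
	// indicates some control-plane node currently holds and serves the VIP.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
		if err != nil {
			fmt.Fprintln(os.Stderr, "VIP not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("control-plane VIP is accepting connections")
	}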
	I0828 17:16:09.374958   29200 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:16:09.384037   29200 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0828 17:16:09.384089   29200 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0828 17:16:09.392959   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0828 17:16:09.392977   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0828 17:16:09.392988   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:16:09.392996   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:16:09.392960   29200 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0828 17:16:09.393060   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0828 17:16:09.393049   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0828 17:16:09.393100   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:16:09.409197   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0828 17:16:09.409237   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0828 17:16:09.409260   29200 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:16:09.409314   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0828 17:16:09.409335   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0828 17:16:09.409343   29200 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0828 17:16:09.442382   29200 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0828 17:16:09.442426   29200 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
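	[editor's note] The "Not caching binary" lines above fetch kubeadm, kubectl, and kubelet from dl.k8s.io together with their ".sha256" checksum files, then scp them into /var/lib/minikube/binaries/v1.31.0 on the new node. Below is a minimal sketch of the checksum-verification step, assuming the binary and its .sha256 companion are already on disk; file names are taken from the log, the helper itself is illustrative rather than minikube's real code path.

	// verifySHA256 compares a downloaded binary against the hex digest in its
	// .sha256 companion file (e.g. kubelet vs kubelet.sha256). Sketch only.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
		"strings"
	)

	func verifySHA256(binPath, sumPath string) error {
		f, err := os.Open(binPath)
		if err != nil {
			return err
		}
		defer f.Close()

		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))

		sum, err := os.ReadFile(sumPath)
		if err != nil {
			return err
		}
		fields := strings.Fields(string(sum))
		if len(fields) == 0 {
			return fmt.Errorf("empty checksum file %s", sumPath)
		}
		want := fields[0] // digest is the first token in the .sha256 file

		if got != want {
			return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binPath, got, want)
		}
		return nil
	}

	func main() {
		if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}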
	I0828 17:16:10.291909   29200 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0828 17:16:10.302278   29200 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0828 17:16:10.319531   29200 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:16:10.336811   29200 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:16:10.353567   29200 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:16:10.357434   29200 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:16:10.369450   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:10.477921   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:16:10.493652   29200 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:16:10.493999   29200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:16:10.494038   29200 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:16:10.512191   29200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0828 17:16:10.512614   29200 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:16:10.513055   29200 main.go:141] libmachine: Using API Version  1
	I0828 17:16:10.513081   29200 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:16:10.513416   29200 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:16:10.513601   29200 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:16:10.513758   29200 start.go:317] joinCluster: &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:16:10.513880   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0828 17:16:10.513897   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:16:10.516326   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:10.516806   29200 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:16:10.516830   29200 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:16:10.516997   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:16:10.517137   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:16:10.517271   29200 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:16:10.517451   29200 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:16:10.663120   29200 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:16:10.663172   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5l4itz.xeascawi8wyu6ziv --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m03 --control-plane --apiserver-advertise-address=192.168.39.28 --apiserver-bind-port=8443"
	I0828 17:16:34.010919   29200 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 5l4itz.xeascawi8wyu6ziv --discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-240486-m03 --control-plane --apiserver-advertise-address=192.168.39.28 --apiserver-bind-port=8443": (23.34771997s)
	I0828 17:16:34.010954   29200 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0828 17:16:34.433957   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-240486-m03 minikube.k8s.io/updated_at=2024_08_28T17_16_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=ha-240486 minikube.k8s.io/primary=false
	I0828 17:16:34.596941   29200 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-240486-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0828 17:16:34.718821   29200 start.go:319] duration metric: took 24.205058483s to joinCluster
	I0828 17:16:34.718905   29200 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:16:34.719248   29200 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:16:34.720195   29200 out.go:177] * Verifying Kubernetes components...
	I0828 17:16:34.721391   29200 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:16:34.929245   29200 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:16:34.947136   29200 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:16:34.947467   29200 kapi.go:59] client config for ha-240486: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0828 17:16:34.947551   29200 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.227:8443
	I0828 17:16:34.947825   29200 node_ready.go:35] waiting up to 6m0s for node "ha-240486-m03" to be "Ready" ...
	I0828 17:16:34.947925   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:34.947936   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:34.947948   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:34.947960   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:34.951547   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:35.448825   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:35.448852   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:35.448864   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:35.448870   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:35.452064   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:35.948289   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:35.948311   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:35.948322   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:35.948326   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:35.951681   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.448834   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:36.448857   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:36.448866   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:36.448869   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:36.452315   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.948048   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:36.948071   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:36.948081   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:36.948087   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:36.951955   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:36.952483   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:37.448931   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:37.448953   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:37.448963   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:37.448970   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:37.452509   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:37.948330   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:37.948349   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:37.948359   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:37.948363   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:37.951947   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.448739   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:38.448768   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:38.448780   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:38.448785   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:38.451989   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.949001   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:38.949026   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:38.949036   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:38.949040   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:38.952828   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:38.953471   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:39.448839   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:39.448862   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:39.448872   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:39.448876   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:39.451920   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:39.948963   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:39.948999   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:39.949011   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:39.949016   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:39.952092   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:40.448115   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:40.448149   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:40.448166   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:40.448174   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:40.451580   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:40.948234   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:40.948258   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:40.948269   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:40.948275   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:40.951032   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:41.449101   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:41.449184   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:41.449200   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:41.449206   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:41.459709   29200 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0828 17:16:41.461843   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:41.948021   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:41.948046   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:41.948057   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:41.948063   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:41.951063   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:42.449029   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:42.449051   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:42.449060   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:42.449063   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:42.452173   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:42.949024   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:42.949045   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:42.949056   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:42.949065   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:42.953137   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:43.448737   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:43.448769   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:43.448779   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:43.448786   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:43.451797   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:43.948226   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:43.948249   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:43.948257   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:43.948261   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:43.951313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:43.951973   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:44.448846   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:44.448870   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:44.448878   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:44.448883   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:44.451958   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:44.948937   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:44.948959   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:44.948967   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:44.948971   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:44.951966   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:45.449016   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:45.449041   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:45.449049   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:45.449052   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:45.452583   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:45.948773   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:45.948795   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:45.948804   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:45.948810   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:45.951869   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:45.952337   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:46.448804   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:46.448834   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:46.448846   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:46.448852   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:46.451969   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:46.948717   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:46.948742   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:46.948750   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:46.948754   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:46.953324   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:47.448254   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:47.448276   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:47.448289   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:47.448295   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:47.452156   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:47.948124   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:47.948148   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:47.948159   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:47.948165   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:47.952909   29200 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0828 17:16:47.953459   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:48.448719   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:48.448740   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:48.448748   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:48.448752   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:48.452018   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:48.948015   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:48.948043   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:48.948052   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:48.948056   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:48.950826   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:49.448227   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:49.448250   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:49.448258   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:49.448262   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:49.451879   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:49.948800   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:49.948821   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:49.948829   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:49.948833   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:49.952050   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:50.448004   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:50.448024   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:50.448032   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:50.448038   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:50.451346   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:50.451909   29200 node_ready.go:53] node "ha-240486-m03" has status "Ready":"False"
	I0828 17:16:50.948669   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:50.948695   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:50.948708   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:50.948715   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:50.952701   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:51.448003   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:51.448027   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:51.448035   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:51.448040   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:51.451036   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:51.948963   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:51.948983   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:51.948991   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:51.948994   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:51.951688   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.448612   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.448641   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.448654   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.448665   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.451662   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.452122   29200 node_ready.go:49] node "ha-240486-m03" has status "Ready":"True"
	I0828 17:16:52.452141   29200 node_ready.go:38] duration metric: took 17.504298399s for node "ha-240486-m03" to be "Ready" ...
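	[editor's note] The repeated GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03 requests above are the readiness wait: the node is re-queried roughly every 500ms until its Ready condition turns True (about 17.5s here). The sketch below shows the same wait written against client-go, assuming a kubeconfig at the default location; the node name and timeout are taken from this run for illustration, and this is not the code path minikube actually uses.

	// waitNodeReady polls a node until its Ready condition is True, the same
	// shape as the GET loop in the log above. Sketch only.
	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %s not Ready: %w", name, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-240486-m03"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("node Ready")
	}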
	I0828 17:16:52.452151   29200 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:16:52.452216   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:52.452230   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.452240   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.452246   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.458514   29200 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0828 17:16:52.465149   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.465243   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-wtzml
	I0828 17:16:52.465255   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.465266   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.465271   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.467996   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.468617   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.468632   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.468639   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.468644   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.471395   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.471762   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.471779   29200 pod_ready.go:82] duration metric: took 6.604558ms for pod "coredns-6f6b679f8f-wtzml" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.471788   29200 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.471833   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-x562s
	I0828 17:16:52.471841   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.471847   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.471851   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.474021   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.474714   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.474727   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.474734   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.474738   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.476781   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.477183   29200 pod_ready.go:93] pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.477201   29200 pod_ready.go:82] duration metric: took 5.406335ms for pod "coredns-6f6b679f8f-x562s" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.477214   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.477266   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486
	I0828 17:16:52.477277   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.477287   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.477294   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.479394   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.479851   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:52.479863   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.479870   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.479873   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.481735   29200 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0828 17:16:52.482221   29200 pod_ready.go:93] pod "etcd-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.482237   29200 pod_ready.go:82] duration metric: took 5.01562ms for pod "etcd-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.482248   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.482304   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m02
	I0828 17:16:52.482314   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.482324   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.482333   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.484876   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.485297   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:52.485312   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.485322   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.485327   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.487514   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:52.487912   29200 pod_ready.go:93] pod "etcd-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.487927   29200 pod_ready.go:82] duration metric: took 5.67224ms for pod "etcd-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.487934   29200 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:52.649329   29200 request.go:632] Waited for 161.343759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m03
	I0828 17:16:52.649421   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/etcd-ha-240486-m03
	I0828 17:16:52.649433   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.649441   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.649447   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.652720   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:52.849578   29200 request.go:632] Waited for 196.340431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.849673   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:52.849680   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:52.849697   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:52.849704   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:52.853178   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:52.853893   29200 pod_ready.go:93] pod "etcd-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:52.853915   29200 pod_ready.go:82] duration metric: took 365.973206ms for pod "etcd-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
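	[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" messages below and above come from client-go's default client-side rate limiter (QPS 5, burst 10 when the rest.Config leaves them unset, as the QPS:0, Burst:0 dump earlier shows); the burst of per-pod status checks across three control-plane nodes hits that limit quickly. A small sketch of raising the limits on one's own client follows; the values are arbitrary examples and say nothing about what minikube itself configures.

	// Raising QPS/Burst on a rest.Config removes client-side throttling waits
	// for bursty sequences of GETs. Sketch only; values are illustrative.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cfg.QPS = 50    // client-go default is 5 requests/second
		cfg.Burst = 100 // client-go default is 10
		cs := kubernetes.NewForConfigOrDie(cfg)
		_ = cs // use this clientset for the bursty per-pod status checks
		fmt.Println("client configured with higher QPS/Burst")
	}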
	I0828 17:16:52.853937   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.048932   29200 request.go:632] Waited for 194.927532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:16:53.049007   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486
	I0828 17:16:53.049013   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.049021   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.049030   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.052313   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.249340   29200 request.go:632] Waited for 196.380576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:53.249433   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:53.249439   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.249449   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.249458   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.253418   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.254118   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:53.254137   29200 pod_ready.go:82] duration metric: took 400.191683ms for pod "kube-apiserver-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.254150   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.448718   29200 request.go:632] Waited for 194.496513ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:16:53.448773   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m02
	I0828 17:16:53.448778   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.448785   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.448789   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.452092   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.648618   29200 request.go:632] Waited for 195.775747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:53.648716   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:53.648728   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.648738   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.648742   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.652212   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:53.652756   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:53.652774   29200 pod_ready.go:82] duration metric: took 398.616132ms for pod "kube-apiserver-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.652786   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:53.848927   29200 request.go:632] Waited for 196.04388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m03
	I0828 17:16:53.848989   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-240486-m03
	I0828 17:16:53.848996   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:53.849006   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:53.849017   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:53.852769   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.048790   29200 request.go:632] Waited for 195.282477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:54.048874   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:54.048883   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.048891   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.048896   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.052238   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.053003   29200 pod_ready.go:93] pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.053024   29200 pod_ready.go:82] duration metric: took 400.227358ms for pod "kube-apiserver-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.053037   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.249140   29200 request.go:632] Waited for 196.038014ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:16:54.249209   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486
	I0828 17:16:54.249216   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.249224   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.249236   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.252312   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.449412   29200 request.go:632] Waited for 196.369336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:54.449483   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:54.449488   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.449495   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.449499   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.452556   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.452976   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.452994   29200 pod_ready.go:82] duration metric: took 399.949839ms for pod "kube-controller-manager-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.453003   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.649574   29200 request.go:632] Waited for 196.481532ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:16:54.649640   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m02
	I0828 17:16:54.649646   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.649654   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.649658   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.653202   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.849043   29200 request.go:632] Waited for 195.224597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:54.849092   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:54.849097   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:54.849108   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:54.849113   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:54.852286   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:54.852964   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:54.852987   29200 pod_ready.go:82] duration metric: took 399.974077ms for pod "kube-controller-manager-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:54.853002   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.049516   29200 request.go:632] Waited for 196.439033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m03
	I0828 17:16:55.049570   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-240486-m03
	I0828 17:16:55.049575   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.049582   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.049588   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.052994   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:55.249029   29200 request.go:632] Waited for 195.36517ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:55.249100   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:55.249108   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.249120   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.249127   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.252059   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:55.252757   29200 pod_ready.go:93] pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:55.252776   29200 pod_ready.go:82] duration metric: took 399.764707ms for pod "kube-controller-manager-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.252790   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.448810   29200 request.go:632] Waited for 195.952202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:16:55.448891   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4w7tt
	I0828 17:16:55.448897   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.448905   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.448910   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.452174   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:55.649253   29200 request.go:632] Waited for 196.378674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:55.649328   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:55.649336   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.649347   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.649370   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.652255   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:55.652849   29200 pod_ready.go:93] pod "kube-proxy-4w7tt" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:55.652865   29200 pod_ready.go:82] duration metric: took 400.068294ms for pod "kube-proxy-4w7tt" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.652874   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:55.849052   29200 request.go:632] Waited for 196.115456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:16:55.849137   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jdnzs
	I0828 17:16:55.849146   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:55.849157   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:55.849163   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:55.852354   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.049440   29200 request.go:632] Waited for 196.352699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.049507   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.049512   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.049520   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.049525   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.052552   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.053050   29200 pod_ready.go:93] pod "kube-proxy-jdnzs" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.053068   29200 pod_ready.go:82] duration metric: took 400.187423ms for pod "kube-proxy-jdnzs" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.053081   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ktw9z" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.249138   29200 request.go:632] Waited for 195.985128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktw9z
	I0828 17:16:56.249229   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-proxy-ktw9z
	I0828 17:16:56.249240   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.249252   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.249263   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.252728   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.449673   29200 request.go:632] Waited for 196.397721ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:56.449724   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:56.449729   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.449737   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.449742   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.452895   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.453443   29200 pod_ready.go:93] pod "kube-proxy-ktw9z" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.453460   29200 pod_ready.go:82] duration metric: took 400.371434ms for pod "kube-proxy-ktw9z" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.453468   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.649618   29200 request.go:632] Waited for 196.078175ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:16:56.649671   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486
	I0828 17:16:56.649676   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.649686   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.649693   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.653175   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.848936   29200 request.go:632] Waited for 195.219368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.849028   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486
	I0828 17:16:56.849039   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:56.849047   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:56.849050   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:56.852177   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:56.852638   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:56.852660   29200 pod_ready.go:82] duration metric: took 399.184775ms for pod "kube-scheduler-ha-240486" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:56.852677   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.049552   29200 request.go:632] Waited for 196.789794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:16:57.049607   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m02
	I0828 17:16:57.049620   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.049629   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.049633   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.052639   29200 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0828 17:16:57.249603   29200 request.go:632] Waited for 196.390918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:57.249663   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m02
	I0828 17:16:57.249669   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.249676   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.249680   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.252880   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.253381   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:57.253399   29200 pod_ready.go:82] duration metric: took 400.711283ms for pod "kube-scheduler-ha-240486-m02" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.253408   29200 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.449469   29200 request.go:632] Waited for 195.958076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m03
	I0828 17:16:57.449541   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-240486-m03
	I0828 17:16:57.449557   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.449569   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.449577   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.453113   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.649166   29200 request.go:632] Waited for 195.360322ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:57.649218   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes/ha-240486-m03
	I0828 17:16:57.649223   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.649231   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.649234   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.652459   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:57.652965   29200 pod_ready.go:93] pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace has status "Ready":"True"
	I0828 17:16:57.652982   29200 pod_ready.go:82] duration metric: took 399.56894ms for pod "kube-scheduler-ha-240486-m03" in "kube-system" namespace to be "Ready" ...
	I0828 17:16:57.652993   29200 pod_ready.go:39] duration metric: took 5.20083003s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:16:57.653006   29200 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:16:57.653056   29200 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:16:57.672165   29200 api_server.go:72] duration metric: took 22.953223062s to wait for apiserver process to appear ...
	I0828 17:16:57.672193   29200 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:16:57.672211   29200 api_server.go:253] Checking apiserver healthz at https://192.168.39.227:8443/healthz ...
	I0828 17:16:57.676355   29200 api_server.go:279] https://192.168.39.227:8443/healthz returned 200:
	ok
	I0828 17:16:57.676423   29200 round_trippers.go:463] GET https://192.168.39.227:8443/version
	I0828 17:16:57.676433   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.676444   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.676452   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.677394   29200 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0828 17:16:57.677452   29200 api_server.go:141] control plane version: v1.31.0
	I0828 17:16:57.677468   29200 api_server.go:131] duration metric: took 5.26686ms to wait for apiserver health ...
	I0828 17:16:57.677480   29200 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:16:57.848823   29200 request.go:632] Waited for 171.25665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:57.848874   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:57.848880   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:57.848887   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:57.848892   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:57.854303   29200 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0828 17:16:57.860479   29200 system_pods.go:59] 24 kube-system pods found
	I0828 17:16:57.860505   29200 system_pods.go:61] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:16:57.860510   29200 system_pods.go:61] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:16:57.860514   29200 system_pods.go:61] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:16:57.860517   29200 system_pods.go:61] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:16:57.860520   29200 system_pods.go:61] "etcd-ha-240486-m03" [a43d3636-8296-40e7-8975-fb113ef5e8db] Running
	I0828 17:16:57.860523   29200 system_pods.go:61] "kindnet-bgr7f" [8c938a5d-5f3b-487b-a422-94cfda96c35d] Running
	I0828 17:16:57.860527   29200 system_pods.go:61] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:16:57.860530   29200 system_pods.go:61] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:16:57.860533   29200 system_pods.go:61] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:16:57.860538   29200 system_pods.go:61] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:16:57.860541   29200 system_pods.go:61] "kube-apiserver-ha-240486-m03" [9d4a7b86-acd1-4cbd-a97b-1a3269adeff7] Running
	I0828 17:16:57.860544   29200 system_pods.go:61] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:16:57.860549   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:16:57.860552   29200 system_pods.go:61] "kube-controller-manager-ha-240486-m03" [cad610de-6a16-4347-9f6a-8d8a8b5bda54] Running
	I0828 17:16:57.860556   29200 system_pods.go:61] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:16:57.860559   29200 system_pods.go:61] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:16:57.860562   29200 system_pods.go:61] "kube-proxy-ktw9z" [d53ddde6-1a83-498f-90bb-ea71dce1d595] Running
	I0828 17:16:57.860565   29200 system_pods.go:61] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:16:57.860568   29200 system_pods.go:61] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:16:57.860570   29200 system_pods.go:61] "kube-scheduler-ha-240486-m03" [73dc0f31-c42b-4ee4-8d92-8ac9f09d2f06] Running
	I0828 17:16:57.860574   29200 system_pods.go:61] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:16:57.860578   29200 system_pods.go:61] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:16:57.860581   29200 system_pods.go:61] "kube-vip-ha-240486-m03" [86259d01-d574-4408-892a-ed17b0b74e91] Running
	I0828 17:16:57.860584   29200 system_pods.go:61] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:16:57.860590   29200 system_pods.go:74] duration metric: took 183.101069ms to wait for pod list to return data ...
	I0828 17:16:57.860600   29200 default_sa.go:34] waiting for default service account to be created ...
	I0828 17:16:58.049034   29200 request.go:632] Waited for 188.361616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:16:58.049099   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/default/serviceaccounts
	I0828 17:16:58.049104   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.049111   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.049118   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.052878   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:58.052984   29200 default_sa.go:45] found service account: "default"
	I0828 17:16:58.052997   29200 default_sa.go:55] duration metric: took 192.392294ms for default service account to be created ...
	I0828 17:16:58.053004   29200 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 17:16:58.249510   29200 request.go:632] Waited for 196.434256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:58.249570   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/namespaces/kube-system/pods
	I0828 17:16:58.249577   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.249587   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.249597   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.257387   29200 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0828 17:16:58.263744   29200 system_pods.go:86] 24 kube-system pods found
	I0828 17:16:58.263769   29200 system_pods.go:89] "coredns-6f6b679f8f-wtzml" [424f87f7-0221-432d-a04f-8f276386be98] Running
	I0828 17:16:58.263775   29200 system_pods.go:89] "coredns-6f6b679f8f-x562s" [78fab040-ae1a-425e-9dc5-e10594b84b9f] Running
	I0828 17:16:58.263779   29200 system_pods.go:89] "etcd-ha-240486" [8a6cf9e2-f806-44ae-b6ef-2a522dc2f516] Running
	I0828 17:16:58.263783   29200 system_pods.go:89] "etcd-ha-240486-m02" [2053f850-310f-46b3-b3d0-a2dbcf97dd70] Running
	I0828 17:16:58.263786   29200 system_pods.go:89] "etcd-ha-240486-m03" [a43d3636-8296-40e7-8975-fb113ef5e8db] Running
	I0828 17:16:58.263790   29200 system_pods.go:89] "kindnet-bgr7f" [8c938a5d-5f3b-487b-a422-94cfda96c35d] Running
	I0828 17:16:58.263793   29200 system_pods.go:89] "kindnet-pb8m7" [67180991-ca3a-4cfb-ba43-919c64d68657] Running
	I0828 17:16:58.263797   29200 system_pods.go:89] "kindnet-q9q9q" [2915b192-297e-4d73-802a-37660942c8c1] Running
	I0828 17:16:58.263799   29200 system_pods.go:89] "kube-apiserver-ha-240486" [e2c0b6cc-87e7-4ae4-823f-c51b100d056d] Running
	I0828 17:16:58.263804   29200 system_pods.go:89] "kube-apiserver-ha-240486-m02" [ead49a23-e0f0-4f8f-b327-6cd1d648ff65] Running
	I0828 17:16:58.263810   29200 system_pods.go:89] "kube-apiserver-ha-240486-m03" [9d4a7b86-acd1-4cbd-a97b-1a3269adeff7] Running
	I0828 17:16:58.263815   29200 system_pods.go:89] "kube-controller-manager-ha-240486" [1b0f6cba-56b3-4e54-b3fc-d5dba431f647] Running
	I0828 17:16:58.263821   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m02" [20c49f1a-4f3d-4ed1-bca3-7efa53c61e4e] Running
	I0828 17:16:58.263829   29200 system_pods.go:89] "kube-controller-manager-ha-240486-m03" [cad610de-6a16-4347-9f6a-8d8a8b5bda54] Running
	I0828 17:16:58.263835   29200 system_pods.go:89] "kube-proxy-4w7tt" [5188f77d-e0ea-4e42-a5c4-173a8d7680dd] Running
	I0828 17:16:58.263845   29200 system_pods.go:89] "kube-proxy-jdnzs" [9c500e4d-bea4-4389-aca7-ebf805f2e642] Running
	I0828 17:16:58.263850   29200 system_pods.go:89] "kube-proxy-ktw9z" [d53ddde6-1a83-498f-90bb-ea71dce1d595] Running
	I0828 17:16:58.263853   29200 system_pods.go:89] "kube-scheduler-ha-240486" [ca5398d3-c263-4a18-9f9e-554bf50bf7d4] Running
	I0828 17:16:58.263857   29200 system_pods.go:89] "kube-scheduler-ha-240486-m02" [030ee5b8-449b-48ed-aaf4-ff4afeb8cae2] Running
	I0828 17:16:58.263863   29200 system_pods.go:89] "kube-scheduler-ha-240486-m03" [73dc0f31-c42b-4ee4-8d92-8ac9f09d2f06] Running
	I0828 17:16:58.263867   29200 system_pods.go:89] "kube-vip-ha-240486" [f1caf9b0-cb2f-462f-be58-ee158739bb79] Running
	I0828 17:16:58.263872   29200 system_pods.go:89] "kube-vip-ha-240486-m02" [909bf826-9c16-458a-8721-9e9ddc2eda22] Running
	I0828 17:16:58.263877   29200 system_pods.go:89] "kube-vip-ha-240486-m03" [86259d01-d574-4408-892a-ed17b0b74e91] Running
	I0828 17:16:58.263882   29200 system_pods.go:89] "storage-provisioner" [83a920cf-9505-4ae6-bd10-2582b38ee29b] Running
	I0828 17:16:58.263888   29200 system_pods.go:126] duration metric: took 210.877499ms to wait for k8s-apps to be running ...
	I0828 17:16:58.263898   29200 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 17:16:58.263948   29200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:16:58.279096   29200 system_svc.go:56] duration metric: took 15.178702ms WaitForService to wait for kubelet
	I0828 17:16:58.279128   29200 kubeadm.go:582] duration metric: took 23.560183555s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:16:58.279150   29200 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:16:58.448629   29200 request.go:632] Waited for 169.400673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.227:8443/api/v1/nodes
	I0828 17:16:58.448688   29200 round_trippers.go:463] GET https://192.168.39.227:8443/api/v1/nodes
	I0828 17:16:58.448697   29200 round_trippers.go:469] Request Headers:
	I0828 17:16:58.448705   29200 round_trippers.go:473]     Accept: application/json, */*
	I0828 17:16:58.448709   29200 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0828 17:16:58.452448   29200 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0828 17:16:58.453479   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453499   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453510   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453514   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453518   29200 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:16:58.453521   29200 node_conditions.go:123] node cpu capacity is 2
	I0828 17:16:58.453525   29200 node_conditions.go:105] duration metric: took 174.369219ms to run NodePressure ...
	I0828 17:16:58.453535   29200 start.go:241] waiting for startup goroutines ...
	I0828 17:16:58.453554   29200 start.go:255] writing updated cluster config ...
	I0828 17:16:58.453813   29200 ssh_runner.go:195] Run: rm -f paused
	I0828 17:16:58.504720   29200 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 17:16:58.506500   29200 out.go:177] * Done! kubectl is now configured to use "ha-240486" cluster and "default" namespace by default
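	
	For reference only (not part of the captured run above): the readiness polling traced in this log — alternating pod and node GETs, client-side throttled to roughly one request per ~200ms, with a 6m0s budget per pod — can be reproduced with client-go. A minimal sketch, assuming the current kubeconfig context points at the "ha-240486" cluster; waitPodReady is a hypothetical helper, not a minikube function:
	
	```go
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitPodReady polls a pod until its Ready condition is True, mirroring the
	// pod_ready.go wait seen in the log (illustrative sketch, not minikube's code).
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		// ~400ms between checks, up to the 6m0s per-pod budget used above.
		return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		// Load the default kubeconfig (the context minikube configured as "ha-240486").
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-scheduler-ha-240486"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
	```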
	
	
	==> CRI-O <==
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.950169064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=46c4a34c-d4ba-423f-aa93-069ab91190d2 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.951166592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb8c8ae3-bfbe-4449-bbe2-852be2175c7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.951596180Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865686951574163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb8c8ae3-bfbe-4449-bbe2-852be2175c7c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.952151494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7604b8ee-e3f1-428c-858c-b821382ea137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.952250786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7604b8ee-e3f1-428c-858c-b821382ea137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.952531125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7604b8ee-e3f1-428c-858c-b821382ea137 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.989037687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8077081-08cc-41b6-8eb5-33fa862ea5e5 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.989111823Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8077081-08cc-41b6-8eb5-33fa862ea5e5 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.990285101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb9e421c-a3d3-4dff-9650-f8bc20c902ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.990902609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865686990876992,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb9e421c-a3d3-4dff-9650-f8bc20c902ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.991544821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a87d405-4f41-4f68-8714-f619eb1b2016 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.991596160Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a87d405-4f41-4f68-8714-f619eb1b2016 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:26 ha-240486 crio[664]: time="2024-08-28 17:21:26.991852082Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a87d405-4f41-4f68-8714-f619eb1b2016 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.022128968Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=cd79fcc6-d79c-4ace-9892-9259a8539f8b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.023072691Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-tnmmz,Uid:e4608982-afdd-491b-8fdb-ede6a6a4167a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865419710993355,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T17:16:59.397299666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-x562s,Uid:78fab040-ae1a-425e-9dc5-e10594b84b9f,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1724865284963060055,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T17:14:44.652010894Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:83a920cf-9505-4ae6-bd10-2582b38ee29b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865284960052128,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-28T17:14:44.651274589Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-wtzml,Uid:424f87f7-0221-432d-a04f-8f276386be98,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1724865284951041599,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T17:14:44.644406198Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&PodSandboxMetadata{Name:kube-proxy-jdnzs,Uid:9c500e4d-bea4-4389-aca7-ebf805f2e642,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865268886615244,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-28T17:14:28.568791488Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&PodSandboxMetadata{Name:kindnet-pb8m7,Uid:67180991-ca3a-4cfb-ba43-919c64d68657,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865268872850808,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T17:14:28.561955280Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-240486,Uid:5262792087191096d4a2463307ef739d,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865257945523955,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5262792087191096d4a2463307ef739d,kubernetes.io/config.seen: 2024-08-28T17:14:17.461102370Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-240486,Uid:393e4e8ab105af585ab1f9ebd5be80bc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865257933343125,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80b
c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.227:8443,kubernetes.io/config.hash: 393e4e8ab105af585ab1f9ebd5be80bc,kubernetes.io/config.seen: 2024-08-28T17:14:17.461101124Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-240486,Uid:a9ebf3af9de277a23996fed4129df261,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865257929407855,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{kubernetes.io/config.hash: a9ebf3af9de277a23996fed4129df261,kubernetes.io/config.seen: 2024-08-28T17:14:17.461096171Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e
4e8c2b35a2c61218a808276,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-240486,Uid:7eef3407dfda5c22c64bcead223dfe4f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865257928073851,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7eef3407dfda5c22c64bcead223dfe4f,kubernetes.io/config.seen: 2024-08-28T17:14:17.461103791Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&PodSandboxMetadata{Name:etcd-ha-240486,Uid:4a055cdc0d382d6b916dd8109df393b3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1724865257919187465,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-240486,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.227:2379,kubernetes.io/config.hash: 4a055cdc0d382d6b916dd8109df393b3,kubernetes.io/config.seen: 2024-08-28T17:14:17.461099925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=cd79fcc6-d79c-4ace-9892-9259a8539f8b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.023672800Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b926d4a-9ef1-42e1-95ff-a1cb82758874 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.023742193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b926d4a-9ef1-42e1-95ff-a1cb82758874 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.023996574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b926d4a-9ef1-42e1-95ff-a1cb82758874 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.030431656Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a49a82cc-11cb-4f7c-a788-37b6b917df37 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.030501765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a49a82cc-11cb-4f7c-a788-37b6b917df37 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.031630565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a26afb2d-48cb-47f0-b635-a32e2621ecf2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.032094818Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865687032072316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a26afb2d-48cb-47f0-b635-a32e2621ecf2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.032707516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da5fe6e8-658b-456f-b2e6-04c6592ee875 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.032760489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da5fe6e8-658b-456f-b2e6-04c6592ee875 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:21:27 ha-240486 crio[664]: time="2024-08-28 17:21:27.033167758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865423382452291,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285217716166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865285212462904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3aa8b2f45c32d1c7fe1af7e793aec51df9598c41c99ee687cd40be8d88331bfb,PodSandboxId:9d51ffa046dff43d72be361cb1094bda9fbf79e1f5066caf2d7feb976ad4b6f5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1724865285167345783,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1724865273106299231,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172486526
9335959801,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846,PodSandboxId:8e53bb3dd1994dc2372503ba92a8a802408f1416e548b5d45ad1e8f8561f566b,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172486526115
1684318,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9ebf3af9de277a23996fed4129df261,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865258185171797,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865258176716922,Labels:map[string]string{io.kubernetes.container.name: e
tcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883,PodSandboxId:b76fb735e82a8692cfc2d9c329c6a34ad1f05e8244bf1fb47d71d835bf2492d2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865258118460580,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe,PodSandboxId:4aed61a54422501596e720a1e40c4d9fb8370a25dcd03f271aafdadd956e8a24,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865258133523886,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da5fe6e8-658b-456f-b2e6-04c6592ee875 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d5a3adee06612       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   23adeed9e41e9       busybox-7dff88458-tnmmz
	687020da7d252       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   375c7b919327c       coredns-6f6b679f8f-x562s
	5171fb49fa83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   2efd086107969       coredns-6f6b679f8f-wtzml
	3aa8b2f45c32d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   9d51ffa046dff       storage-provisioner
	a200b18d5b49f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    6 minutes ago       Running             kindnet-cni               0                   0d9937bfda982       kindnet-pb8m7
	5da7c6652ad91       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      6 minutes ago       Running             kube-proxy                0                   762e2586bed26       kube-proxy-jdnzs
	e264b3c2fcf6e       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   8e53bb3dd1994       kube-vip-ha-240486
	1396de2dd1902       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   98bba66b20012       kube-scheduler-ha-240486
	6006f9215c80c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   2280901ed00fa       etcd-ha-240486
	594ab811e29b5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   4aed61a544225       kube-controller-manager-ha-240486
	6c141f787017a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   b76fb735e82a8       kube-apiserver-ha-240486
	
	
	==> coredns [5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc] <==
	[INFO] 10.244.0.4:54948 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000096404s
	[INFO] 10.244.0.4:39957 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002328593s
	[INFO] 10.244.1.2:42445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000451167s
	[INFO] 10.244.3.2:36990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118942s
	[INFO] 10.244.3.2:49081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261149s
	[INFO] 10.244.3.2:35420 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157575s
	[INFO] 10.244.3.2:45145 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000273687s
	[INFO] 10.244.0.4:59568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001810378s
	[INFO] 10.244.1.2:40640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138766s
	[INFO] 10.244.1.2:36403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155827s
	[INFO] 10.244.1.2:57247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096044s
	[INFO] 10.244.3.2:58745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021909s
	[INFO] 10.244.3.2:52666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001012s
	[INFO] 10.244.3.2:55195 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202518s
	[INFO] 10.244.0.4:50754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164536s
	[INFO] 10.244.0.4:52876 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113989s
	[INFO] 10.244.1.2:43752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149181s
	[INFO] 10.244.1.2:39336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272379s
	[INFO] 10.244.1.2:54086 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180306s
	[INFO] 10.244.1.2:35731 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186612s
	[INFO] 10.244.3.2:38396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014603s
	[INFO] 10.244.3.2:37082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155781s
	[INFO] 10.244.0.4:42529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117311s
	[INFO] 10.244.0.4:54981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113539s
	[INFO] 10.244.0.4:46325 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065905s
	
	
	==> coredns [687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342] <==
	[INFO] 10.244.1.2:51840 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159319s
	[INFO] 10.244.1.2:45908 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003464807s
	[INFO] 10.244.1.2:45832 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227329s
	[INFO] 10.244.1.2:55717 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010110062s
	[INFO] 10.244.1.2:36777 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189682s
	[INFO] 10.244.1.2:33751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105145s
	[INFO] 10.244.1.2:34860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088194s
	[INFO] 10.244.3.2:43474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844418s
	[INFO] 10.244.3.2:42113 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123683s
	[INFO] 10.244.3.2:54119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001316499s
	[INFO] 10.244.3.2:41393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061254s
	[INFO] 10.244.0.4:35761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174103s
	[INFO] 10.244.0.4:35492 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135318s
	[INFO] 10.244.0.4:41816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037492s
	[INFO] 10.244.0.4:56198 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00165456s
	[INFO] 10.244.0.4:42294 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034332s
	[INFO] 10.244.0.4:49049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062307s
	[INFO] 10.244.0.4:43851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033836s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119804s
	[INFO] 10.244.3.2:50434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105903s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063169s
	[INFO] 10.244.0.4:51605 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004099s
	[INFO] 10.244.3.2:53550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157853s
	[INFO] 10.244.3.2:55570 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000261867s
	[INFO] 10.244.0.4:50195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000278101s
	
	
	==> describe nodes <==
	Name:               ha-240486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:21:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:17:27 +0000   Wed, 28 Aug 2024 17:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-240486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73dbe7f63fd4c3baf977a4b53641230
	  System UUID:                b73dbe7f-63fd-4c3b-af97-7a4b53641230
	  Boot ID:                    cb154fe5-0aad-4938-bd54-d2af34922b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tnmmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 coredns-6f6b679f8f-wtzml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 coredns-6f6b679f8f-x562s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m59s
	  kube-system                 etcd-ha-240486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m3s
	  kube-system                 kindnet-pb8m7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m59s
	  kube-system                 kube-apiserver-ha-240486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-controller-manager-ha-240486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-proxy-jdnzs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 kube-scheduler-ha-240486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m3s
	  kube-system                 kube-vip-ha-240486                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m57s  kube-proxy       
	  Normal  Starting                 7m3s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m3s   kubelet          Node ha-240486 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m3s   kubelet          Node ha-240486 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m3s   kubelet          Node ha-240486 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m     node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal  NodeReady                6m43s  kubelet          Node ha-240486 status is now: NodeReady
	  Normal  RegisteredNode           6m5s   node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal  RegisteredNode           4m48s  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	
	
	Name:               ha-240486-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:15:14 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:18:07 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 28 Aug 2024 17:17:17 +0000   Wed, 28 Aug 2024 17:18:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-240486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9be8698d6a9a4f2dbc236b4faf8196d2
	  System UUID:                9be8698d-6a9a-4f2d-bc23-6b4faf8196d2
	  Boot ID:                    d7ccf2dd-2975-4d65-8e82-89ec9777ddfe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5pjcm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-240486-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m11s
	  kube-system                 kindnet-q9q9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-240486-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-controller-manager-ha-240486-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-4w7tt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-240486-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-vip-ha-240486-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  CIDRAssignmentFailed     6m13s                  cidrAllocator    Node ha-240486-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s (x8 over 6m13s)  kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           6m5s                   node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  NodeNotReady             2m40s                  node-controller  Node ha-240486-m02 status is now: NodeNotReady
	
	
	Name:               ha-240486-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_16_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:21:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:17:31 +0000   Wed, 28 Aug 2024 17:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    ha-240486-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b793a5caef8d481e8356b8025697789a
	  System UUID:                b793a5ca-ef8d-481e-8356-b8025697789a
	  Boot ID:                    20c85c11-97db-4e9e-b2a2-d3ce088826f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dtp5b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-240486-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-bgr7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-ha-240486-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-240486-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-ktw9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-ha-240486-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-240486-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m52s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     4m57s                  cidrAllocator    Node ha-240486-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node ha-240486-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal  RegisteredNode           4m48s                  node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	
	
	Name:               ha-240486-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:21:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:18:05 +0000   Wed, 28 Aug 2024 17:17:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-240486-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dbc2f47ba234abeb085dbeb264b66eb
	  System UUID:                2dbc2f47-ba23-4abe-b085-dbeb264b66eb
	  Boot ID:                    50d6dfb8-8ac7-4317-a369-7f2a4a221b1a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gngl7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-proxy-jlk49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  CIDRAssignmentFailed     3m53s                  cidrAllocator    Node ha-240486-m04 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  3m53s (x2 over 3m53s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x2 over 3m53s)  kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x2 over 3m53s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-240486-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug28 17:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051317] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039079] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.719096] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.825427] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Aug28 17:14] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.828086] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054618] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049350] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.168729] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141709] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.274713] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.746562] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.326521] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.055344] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.078869] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.096594] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.273661] kauditd_printk_skb: 28 callbacks suppressed
	[ +15.597500] kauditd_printk_skb: 31 callbacks suppressed
	[Aug28 17:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594] <==
	{"level":"warn","ts":"2024-08-28T17:21:26.915985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.016288Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.273110Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.284067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.290752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.297416Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.301104Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.304433Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.311396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.316230Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.318084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.324445Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.329764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.333009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.338606Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.344896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.352752Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.357122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.361011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.365072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.371529Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.378077Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.415817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.441627Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-28T17:21:27.443355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"bcb2eab2b5d0a9fc","from":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:21:27 up 7 min,  0 users,  load average: 0.48, 0.21, 0.10
	Linux ha-240486 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79] <==
	I0828 17:20:54.033216       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:21:04.041038       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:21:04.041085       1 main.go:299] handling current node
	I0828 17:21:04.041101       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:21:04.041107       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:21:04.041279       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:21:04.041301       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:21:04.041363       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:21:04.041368       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:21:14.036035       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:21:14.036089       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:21:14.036257       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:21:14.036281       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:21:14.036342       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:21:14.036360       1 main.go:299] handling current node
	I0828 17:21:14.036373       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:21:14.036378       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:21:24.036691       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:21:24.036799       1 main.go:299] handling current node
	I0828 17:21:24.036826       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:21:24.036844       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:21:24.037079       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:21:24.037122       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:21:24.037216       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:21:24.037236       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883] <==
	I0828 17:14:22.569383       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0828 17:14:22.579988       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.227]
	I0828 17:14:22.581150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:14:22.587465       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:14:23.022591       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:14:24.419442       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:14:24.434631       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0828 17:14:24.461480       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:14:28.425072       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0828 17:14:28.522380       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0828 17:17:04.945281       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33858: use of closed network connection
	E0828 17:17:05.130326       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33890: use of closed network connection
	E0828 17:17:05.322972       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33914: use of closed network connection
	E0828 17:17:05.514871       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33946: use of closed network connection
	E0828 17:17:05.689135       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:33964: use of closed network connection
	E0828 17:17:05.881850       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39650: use of closed network connection
	E0828 17:17:06.051533       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39672: use of closed network connection
	E0828 17:17:06.227751       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39704: use of closed network connection
	E0828 17:17:06.414739       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39728: use of closed network connection
	E0828 17:17:06.701391       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39750: use of closed network connection
	E0828 17:17:06.869402       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39766: use of closed network connection
	E0828 17:17:07.047700       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39778: use of closed network connection
	E0828 17:17:07.212717       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39798: use of closed network connection
	E0828 17:17:07.387973       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39816: use of closed network connection
	E0828 17:17:07.563040       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:39834: use of closed network connection
	
	
	==> kube-controller-manager [594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe] <==
	I0828 17:17:34.685653       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-240486-m04" podCIDRs=["10.244.4.0/24"]
	I0828 17:17:34.685741       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.685774       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.698367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:34.960838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:35.376370       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:37.302699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:37.952212       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-240486-m04"
	I0828 17:17:37.952796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:38.060437       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:39.395052       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:39.511778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:44.787421       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:54.478777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:17:54.481090       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:54.496818       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:17:57.246174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:18:05.084745       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:18:47.977778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:47.978358       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:18:48.009732       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:48.047911       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.548886ms"
	I0828 17:18:48.048427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="40.961µs"
	I0828 17:18:49.445214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:18:53.156392       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	
	
	==> kube-proxy [5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:14:29.691987       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:14:29.704718       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0828 17:14:29.704803       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:14:29.770515       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:14:29.770605       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:14:29.770636       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:14:29.772841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:14:29.773155       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:14:29.773186       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:14:29.774614       1 config.go:197] "Starting service config controller"
	I0828 17:14:29.774660       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:14:29.774714       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:14:29.774731       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:14:29.775271       1 config.go:326] "Starting node config controller"
	I0828 17:14:29.775300       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:14:29.875056       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:14:29.875142       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:14:29.875473       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096] <==
	W0828 17:14:21.915511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:21.915596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.067534       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:22.068774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.116277       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 17:14:22.116446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.120371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 17:14:22.120551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.154213       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:14:22.154396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:14:22.444089       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:14:22.444262       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:14:25.491027       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:16:59.369035       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dtp5b\": pod busybox-7dff88458-dtp5b is already assigned to node \"ha-240486-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-dtp5b" node="ha-240486-m02"
	E0828 17:16:59.374021       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-dtp5b\": pod busybox-7dff88458-dtp5b is already assigned to node \"ha-240486-m03\"" pod="default/busybox-7dff88458-dtp5b"
	I0828 17:16:59.390834       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="b03278c7-1983-4812-bb23-509106ace2c2" pod="default/busybox-7dff88458-5pjcm" assumedNode="ha-240486-m02" currentNode="ha-240486-m03"
	I0828 17:16:59.407998       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="e4608982-afdd-491b-8fdb-ede6a6a4167a" pod="default/busybox-7dff88458-tnmmz" assumedNode="ha-240486" currentNode="ha-240486-m02"
	E0828 17:16:59.417678       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m03"
	E0828 17:16:59.424827       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod b03278c7-1983-4812-bb23-509106ace2c2(default/busybox-7dff88458-5pjcm) was assumed on ha-240486-m03 but assigned to ha-240486-m02" pod="default/busybox-7dff88458-5pjcm"
	E0828 17:16:59.428003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" pod="default/busybox-7dff88458-5pjcm"
	I0828 17:16:59.428093       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m02"
	E0828 17:16:59.424465       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-tnmmz" node="ha-240486-m02"
	E0828 17:16:59.428536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e4608982-afdd-491b-8fdb-ede6a6a4167a(default/busybox-7dff88458-tnmmz) was assumed on ha-240486-m02 but assigned to ha-240486" pod="default/busybox-7dff88458-tnmmz"
	E0828 17:16:59.428571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" pod="default/busybox-7dff88458-tnmmz"
	I0828 17:16:59.428617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-tnmmz" node="ha-240486"
	
	
	==> kubelet <==
	Aug 28 17:20:14 ha-240486 kubelet[1308]: E0828 17:20:14.481585    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865614481070591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.386310    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:20:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:20:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.483299    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865624483053307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:24 ha-240486 kubelet[1308]: E0828 17:20:24.483337    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865624483053307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:34 ha-240486 kubelet[1308]: E0828 17:20:34.484872    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634484626045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:34 ha-240486 kubelet[1308]: E0828 17:20:34.484970    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865634484626045,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:44 ha-240486 kubelet[1308]: E0828 17:20:44.487580    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865644486806421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:44 ha-240486 kubelet[1308]: E0828 17:20:44.487602    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865644486806421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:54 ha-240486 kubelet[1308]: E0828 17:20:54.490422    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865654489754806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:20:54 ha-240486 kubelet[1308]: E0828 17:20:54.490505    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865654489754806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:04 ha-240486 kubelet[1308]: E0828 17:21:04.491713    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865664491404181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:04 ha-240486 kubelet[1308]: E0828 17:21:04.492089    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865664491404181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:14 ha-240486 kubelet[1308]: E0828 17:21:14.493651    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865674493015983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:14 ha-240486 kubelet[1308]: E0828 17:21:14.493688    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865674493015983,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:24 ha-240486 kubelet[1308]: E0828 17:21:24.385954    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:21:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:21:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:21:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:21:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:21:24 ha-240486 kubelet[1308]: E0828 17:21:24.495286    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865684494781031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:21:24 ha-240486 kubelet[1308]: E0828 17:21:24.495337    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724865684494781031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-240486 -n ha-240486
helpers_test.go:261: (dbg) Run:  kubectl --context ha-240486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.92s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (406.13s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-240486 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-240486 -v=7 --alsologtostderr
E0828 17:23:00.240229   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:23:27.942265   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-240486 -v=7 --alsologtostderr: exit status 82 (2m1.790729417s)

-- stdout --
	* Stopping node "ha-240486-m04"  ...
	* Stopping node "ha-240486-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0828 17:21:28.822529   35320 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:21:28.822626   35320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:28.822633   35320 out.go:358] Setting ErrFile to fd 2...
	I0828 17:21:28.822638   35320 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:21:28.822805   35320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:21:28.823038   35320 out.go:352] Setting JSON to false
	I0828 17:21:28.823116   35320 mustload.go:65] Loading cluster: ha-240486
	I0828 17:21:28.823459   35320 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:21:28.823539   35320 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:21:28.823711   35320 mustload.go:65] Loading cluster: ha-240486
	I0828 17:21:28.823833   35320 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:21:28.823863   35320 stop.go:39] StopHost: ha-240486-m04
	I0828 17:21:28.824206   35320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:28.824248   35320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:28.839618   35320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I0828 17:21:28.840051   35320 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:28.840552   35320 main.go:141] libmachine: Using API Version  1
	I0828 17:21:28.840570   35320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:28.840926   35320 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:28.843312   35320 out.go:177] * Stopping node "ha-240486-m04"  ...
	I0828 17:21:28.844394   35320 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 17:21:28.844437   35320 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:21:28.844666   35320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 17:21:28.844687   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:21:28.847872   35320 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:28.848346   35320 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:17:22 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:21:28.848384   35320 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:21:28.848581   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:21:28.848765   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:21:28.848990   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:21:28.849176   35320 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:21:28.932821   35320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 17:21:28.985427   35320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 17:21:29.038101   35320 main.go:141] libmachine: Stopping "ha-240486-m04"...
	I0828 17:21:29.038145   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:29.039642   35320 main.go:141] libmachine: (ha-240486-m04) Calling .Stop
	I0828 17:21:29.043207   35320 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 0/120
	I0828 17:21:30.155480   35320 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:21:30.156919   35320 main.go:141] libmachine: Machine "ha-240486-m04" was stopped.
	I0828 17:21:30.156936   35320 stop.go:75] duration metric: took 1.312545883s to stop
	I0828 17:21:30.156957   35320 stop.go:39] StopHost: ha-240486-m03
	I0828 17:21:30.157256   35320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:21:30.157291   35320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:21:30.172008   35320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0828 17:21:30.172396   35320 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:21:30.172853   35320 main.go:141] libmachine: Using API Version  1
	I0828 17:21:30.172873   35320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:21:30.173173   35320 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:21:30.175210   35320 out.go:177] * Stopping node "ha-240486-m03"  ...
	I0828 17:21:30.176287   35320 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 17:21:30.176310   35320 main.go:141] libmachine: (ha-240486-m03) Calling .DriverName
	I0828 17:21:30.176503   35320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 17:21:30.176523   35320 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHHostname
	I0828 17:21:30.179308   35320 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:30.179786   35320 main.go:141] libmachine: (ha-240486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:b2:44", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:15:53 +0000 UTC Type:0 Mac:52:54:00:2e:b2:44 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:ha-240486-m03 Clientid:01:52:54:00:2e:b2:44}
	I0828 17:21:30.179816   35320 main.go:141] libmachine: (ha-240486-m03) DBG | domain ha-240486-m03 has defined IP address 192.168.39.28 and MAC address 52:54:00:2e:b2:44 in network mk-ha-240486
	I0828 17:21:30.179946   35320 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHPort
	I0828 17:21:30.180130   35320 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHKeyPath
	I0828 17:21:30.180268   35320 main.go:141] libmachine: (ha-240486-m03) Calling .GetSSHUsername
	I0828 17:21:30.180417   35320 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m03/id_rsa Username:docker}
	I0828 17:21:30.265503   35320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 17:21:30.320539   35320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 17:21:30.377873   35320 main.go:141] libmachine: Stopping "ha-240486-m03"...
	I0828 17:21:30.377906   35320 main.go:141] libmachine: (ha-240486-m03) Calling .GetState
	I0828 17:21:30.379703   35320 main.go:141] libmachine: (ha-240486-m03) Calling .Stop
	I0828 17:21:30.382863   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 0/120
	I0828 17:21:31.384161   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 1/120
	I0828 17:21:32.385562   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 2/120
	I0828 17:21:33.387587   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 3/120
	I0828 17:21:34.389968   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 4/120
	I0828 17:21:35.391339   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 5/120
	I0828 17:21:36.393008   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 6/120
	I0828 17:21:37.394680   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 7/120
	I0828 17:21:38.396834   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 8/120
	I0828 17:21:39.398134   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 9/120
	I0828 17:21:40.400176   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 10/120
	I0828 17:21:41.401659   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 11/120
	I0828 17:21:42.403117   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 12/120
	I0828 17:21:43.404688   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 13/120
	I0828 17:21:44.406125   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 14/120
	I0828 17:21:45.408095   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 15/120
	I0828 17:21:46.409620   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 16/120
	I0828 17:21:47.410947   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 17/120
	I0828 17:21:48.412242   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 18/120
	I0828 17:21:49.413502   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 19/120
	I0828 17:21:50.415612   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 20/120
	I0828 17:21:51.416965   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 21/120
	I0828 17:21:52.418161   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 22/120
	I0828 17:21:53.419488   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 23/120
	I0828 17:21:54.420790   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 24/120
	I0828 17:21:55.422481   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 25/120
	I0828 17:21:56.423860   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 26/120
	I0828 17:21:57.425383   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 27/120
	I0828 17:21:58.427002   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 28/120
	I0828 17:21:59.428559   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 29/120
	I0828 17:22:00.430456   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 30/120
	I0828 17:22:01.432008   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 31/120
	I0828 17:22:02.433291   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 32/120
	I0828 17:22:03.434651   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 33/120
	I0828 17:22:04.436497   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 34/120
	I0828 17:22:05.438119   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 35/120
	I0828 17:22:06.439397   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 36/120
	I0828 17:22:07.440652   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 37/120
	I0828 17:22:08.441972   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 38/120
	I0828 17:22:09.443247   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 39/120
	I0828 17:22:10.444884   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 40/120
	I0828 17:22:11.446251   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 41/120
	I0828 17:22:12.447716   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 42/120
	I0828 17:22:13.449048   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 43/120
	I0828 17:22:14.450334   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 44/120
	I0828 17:22:15.451848   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 45/120
	I0828 17:22:16.453445   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 46/120
	I0828 17:22:17.455544   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 47/120
	I0828 17:22:18.456870   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 48/120
	I0828 17:22:19.458154   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 49/120
	I0828 17:22:20.459841   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 50/120
	I0828 17:22:21.461352   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 51/120
	I0828 17:22:22.462936   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 52/120
	I0828 17:22:23.464292   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 53/120
	I0828 17:22:24.465586   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 54/120
	I0828 17:22:25.467374   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 55/120
	I0828 17:22:26.468846   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 56/120
	I0828 17:22:27.470400   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 57/120
	I0828 17:22:28.472673   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 58/120
	I0828 17:22:29.474108   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 59/120
	I0828 17:22:30.475790   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 60/120
	I0828 17:22:31.477270   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 61/120
	I0828 17:22:32.478620   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 62/120
	I0828 17:22:33.480109   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 63/120
	I0828 17:22:34.481419   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 64/120
	I0828 17:22:35.483309   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 65/120
	I0828 17:22:36.485246   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 66/120
	I0828 17:22:37.486607   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 67/120
	I0828 17:22:38.488695   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 68/120
	I0828 17:22:39.490016   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 69/120
	I0828 17:22:40.491641   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 70/120
	I0828 17:22:41.492930   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 71/120
	I0828 17:22:42.494593   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 72/120
	I0828 17:22:43.496602   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 73/120
	I0828 17:22:44.497918   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 74/120
	I0828 17:22:45.500273   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 75/120
	I0828 17:22:46.501554   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 76/120
	I0828 17:22:47.502887   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 77/120
	I0828 17:22:48.504173   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 78/120
	I0828 17:22:49.505562   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 79/120
	I0828 17:22:50.507277   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 80/120
	I0828 17:22:51.508845   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 81/120
	I0828 17:22:52.510405   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 82/120
	I0828 17:22:53.511659   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 83/120
	I0828 17:22:54.512885   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 84/120
	I0828 17:22:55.514620   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 85/120
	I0828 17:22:56.516270   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 86/120
	I0828 17:22:57.517529   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 87/120
	I0828 17:22:58.518759   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 88/120
	I0828 17:22:59.520023   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 89/120
	I0828 17:23:00.521407   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 90/120
	I0828 17:23:01.522751   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 91/120
	I0828 17:23:02.524126   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 92/120
	I0828 17:23:03.525931   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 93/120
	I0828 17:23:04.527228   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 94/120
	I0828 17:23:05.528813   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 95/120
	I0828 17:23:06.530210   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 96/120
	I0828 17:23:07.531467   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 97/120
	I0828 17:23:08.532888   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 98/120
	I0828 17:23:09.534252   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 99/120
	I0828 17:23:10.535863   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 100/120
	I0828 17:23:11.537230   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 101/120
	I0828 17:23:12.538772   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 102/120
	I0828 17:23:13.540265   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 103/120
	I0828 17:23:14.541490   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 104/120
	I0828 17:23:15.543335   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 105/120
	I0828 17:23:16.544672   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 106/120
	I0828 17:23:17.545995   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 107/120
	I0828 17:23:18.547473   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 108/120
	I0828 17:23:19.548715   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 109/120
	I0828 17:23:20.550537   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 110/120
	I0828 17:23:21.551888   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 111/120
	I0828 17:23:22.553527   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 112/120
	I0828 17:23:23.554862   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 113/120
	I0828 17:23:24.556674   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 114/120
	I0828 17:23:25.558293   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 115/120
	I0828 17:23:26.559460   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 116/120
	I0828 17:23:27.560760   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 117/120
	I0828 17:23:28.562062   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 118/120
	I0828 17:23:29.563974   35320 main.go:141] libmachine: (ha-240486-m03) Waiting for machine to stop 119/120
	I0828 17:23:30.564614   35320 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 17:23:30.564684   35320 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0828 17:23:30.566273   35320 out.go:201] 
	W0828 17:23:30.567458   35320 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0828 17:23:30.567485   35320 out.go:270] * 
	W0828 17:23:30.569852   35320 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 17:23:30.571395   35320 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-240486 -v=7 --alsologtostderr" : exit status 82
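The stop failure captured above follows a simple poll-until-stopped pattern: a shutdown is requested, then the VM state is checked roughly once per second for up to 120 attempts (the "Waiting for machine to stop N/120" lines) before the command gives up with GUEST_STOP_TIMEOUT because the guest still reports "Running". A minimal Go sketch of that loop is shown below; the vmDriver interface and the Stop/GetState/waitForStop names are illustrative placeholders, not minikube's actual libmachine API.

package main

import (
	"fmt"
	"time"
)

// vmDriver is a hypothetical stand-in for a machine driver.
type vmDriver interface {
	Stop() error               // ask the hypervisor to shut the guest down
	GetState() (string, error) // e.g. "Running" or "Stopped"
}

// waitForStop mirrors the behaviour seen in the log: poll once per second,
// up to maxAttempts times, and fail if the VM never leaves "Running".
func waitForStop(d vmDriver, maxAttempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	state := ""
	for i := 0; i < maxAttempts; i++ {
		var err error
		state, err = d.GetState()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", state)
}

// fakeDriver simulates a guest that keeps reporting "Running",
// which is exactly the condition that trips GUEST_STOP_TIMEOUT.
type fakeDriver struct{}

func (fakeDriver) Stop() error               { return nil }
func (fakeDriver) GetState() (string, error) { return "Running", nil }

func main() {
	// Three attempts instead of 120 so the demo finishes quickly.
	if err := waitForStop(fakeDriver{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}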
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-240486 --wait=true -v=7 --alsologtostderr
E0828 17:24:23.525849   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:28:00.240203   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-240486 --wait=true -v=7 --alsologtostderr: (4m41.490931579s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-240486
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-240486 -n ha-240486
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-240486 logs -n 25: (2.073574491s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m04 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp testdata/cp-test.txt                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m04_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03:/home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m03 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-240486 node stop m02 -v=7                                                     | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-240486 node start m02 -v=7                                                    | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-240486 -v=7                                                           | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-240486 -v=7                                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-240486 --wait=true -v=7                                                    | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:23 UTC | 28 Aug 24 17:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-240486                                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:28 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:23:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:23:30.615836   35789 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:23:30.615952   35789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:23:30.615961   35789 out.go:358] Setting ErrFile to fd 2...
	I0828 17:23:30.615965   35789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:23:30.616146   35789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:23:30.616698   35789 out.go:352] Setting JSON to false
	I0828 17:23:30.617654   35789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3957,"bootTime":1724861854,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:23:30.617709   35789 start.go:139] virtualization: kvm guest
	I0828 17:23:30.619933   35789 out.go:177] * [ha-240486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:23:30.621177   35789 notify.go:220] Checking for updates...
	I0828 17:23:30.621211   35789 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:23:30.622540   35789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:23:30.623980   35789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:23:30.625233   35789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:23:30.626281   35789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:23:30.627368   35789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:23:30.628809   35789 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:23:30.628886   35789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:23:30.629288   35789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:23:30.629340   35789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:23:30.644356   35789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0828 17:23:30.644750   35789 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:23:30.645232   35789 main.go:141] libmachine: Using API Version  1
	I0828 17:23:30.645250   35789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:23:30.645608   35789 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:23:30.645775   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.680485   35789 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:23:30.681636   35789 start.go:297] selected driver: kvm2
	I0828 17:23:30.681656   35789 start.go:901] validating driver "kvm2" against &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:23:30.681801   35789 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:23:30.682131   35789 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:23:30.682195   35789 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:23:30.697068   35789 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:23:30.697970   35789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:23:30.698035   35789 cni.go:84] Creating CNI manager for ""
	I0828 17:23:30.698046   35789 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 17:23:30.698140   35789 start.go:340] cluster config:
	{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:23:30.698263   35789 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:23:30.699950   35789 out.go:177] * Starting "ha-240486" primary control-plane node in "ha-240486" cluster
	I0828 17:23:30.701019   35789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:23:30.701050   35789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:23:30.701059   35789 cache.go:56] Caching tarball of preloaded images
	I0828 17:23:30.701136   35789 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:23:30.701148   35789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:23:30.701289   35789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:23:30.701523   35789 start.go:360] acquireMachinesLock for ha-240486: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:23:30.701572   35789 start.go:364] duration metric: took 22.404µs to acquireMachinesLock for "ha-240486"
	I0828 17:23:30.701586   35789 start.go:96] Skipping create...Using existing machine configuration
	I0828 17:23:30.701596   35789 fix.go:54] fixHost starting: 
	I0828 17:23:30.701838   35789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:23:30.701869   35789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:23:30.716122   35789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I0828 17:23:30.716502   35789 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:23:30.716942   35789 main.go:141] libmachine: Using API Version  1
	I0828 17:23:30.716960   35789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:23:30.717265   35789 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:23:30.717443   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.717620   35789 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:23:30.719219   35789 fix.go:112] recreateIfNeeded on ha-240486: state=Running err=<nil>
	W0828 17:23:30.719252   35789 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 17:23:30.721157   35789 out.go:177] * Updating the running kvm2 "ha-240486" VM ...
	I0828 17:23:30.722478   35789 machine.go:93] provisionDockerMachine start ...
	I0828 17:23:30.722500   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.722694   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.725260   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.725686   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.725704   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.725862   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.726011   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.726187   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.726297   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.726465   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.726650   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.726662   35789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:23:30.834878   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:23:30.834903   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:30.835126   35789 buildroot.go:166] provisioning hostname "ha-240486"
	I0828 17:23:30.835152   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:30.835389   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.837891   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.838284   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.838311   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.838404   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.838568   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.838730   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.838886   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.839022   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.839189   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.839200   35789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486 && echo "ha-240486" | sudo tee /etc/hostname
	I0828 17:23:30.962253   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:23:30.962275   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.965694   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.966128   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.966153   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.966370   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.966551   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.966724   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.966870   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.967030   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.967194   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.967208   35789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:23:31.074524   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:23:31.074552   35789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:23:31.074580   35789 buildroot.go:174] setting up certificates
	I0828 17:23:31.074589   35789 provision.go:84] configureAuth start
	I0828 17:23:31.074596   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:31.074880   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:23:31.077723   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.078119   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.078140   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.078255   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.080489   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.080800   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.080825   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.080936   35789 provision.go:143] copyHostCerts
	I0828 17:23:31.080980   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:23:31.081014   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:23:31.081028   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:23:31.081100   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:23:31.081180   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:23:31.081197   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:23:31.081204   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:23:31.081237   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:23:31.081275   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:23:31.081291   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:23:31.081297   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:23:31.081321   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:23:31.081379   35789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486 san=[127.0.0.1 192.168.39.227 ha-240486 localhost minikube]
	I0828 17:23:31.146597   35789 provision.go:177] copyRemoteCerts
	I0828 17:23:31.146677   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:23:31.146709   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.149336   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.149690   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.149721   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.149840   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:31.149987   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.150129   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:31.150274   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:23:31.233214   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:23:31.233335   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:23:31.263238   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:23:31.263311   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0828 17:23:31.293351   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:23:31.293424   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:23:31.324577   35789 provision.go:87] duration metric: took 249.97554ms to configureAuth
	I0828 17:23:31.324608   35789 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:23:31.324862   35789 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:23:31.324947   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.327433   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.327839   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.327865   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.328049   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:31.328213   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.328389   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.328591   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:31.328790   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:31.329001   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:31.329016   35789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:25:02.231932   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:25:02.231958   35789 machine.go:96] duration metric: took 1m31.509466053s to provisionDockerMachine
	I0828 17:25:02.231973   35789 start.go:293] postStartSetup for "ha-240486" (driver="kvm2")
	I0828 17:25:02.231986   35789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:25:02.232005   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.232340   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:25:02.232364   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.235426   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.235956   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.235987   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.236179   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.236359   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.236538   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.236705   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.321951   35789 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:25:02.326490   35789 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:25:02.326514   35789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:25:02.326593   35789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:25:02.326704   35789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:25:02.326715   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:25:02.326821   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:25:02.336171   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:25:02.360269   35789 start.go:296] duration metric: took 128.28075ms for postStartSetup
	I0828 17:25:02.360315   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.360596   35789 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0828 17:25:02.360623   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.362984   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.363404   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.363427   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.363568   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.363741   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.363912   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.364008   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	W0828 17:25:02.448084   35789 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0828 17:25:02.448125   35789 fix.go:56] duration metric: took 1m31.746521442s for fixHost
	I0828 17:25:02.448148   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.450857   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.451363   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.451396   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.451532   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.451726   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.451895   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.452019   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.452167   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:25:02.452344   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:25:02.452359   35789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:25:02.562821   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865902.517036600
	
	I0828 17:25:02.562844   35789 fix.go:216] guest clock: 1724865902.517036600
	I0828 17:25:02.562851   35789 fix.go:229] Guest: 2024-08-28 17:25:02.5170366 +0000 UTC Remote: 2024-08-28 17:25:02.44813333 +0000 UTC m=+91.867665805 (delta=68.90327ms)
	I0828 17:25:02.562881   35789 fix.go:200] guest clock delta is within tolerance: 68.90327ms
	I0828 17:25:02.562886   35789 start.go:83] releasing machines lock for "ha-240486", held for 1m31.861305007s
	I0828 17:25:02.562904   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.563160   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:25:02.565485   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.565824   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.565853   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.565971   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566457   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566621   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566729   35789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:25:02.566768   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.566853   35789 ssh_runner.go:195] Run: cat /version.json
	I0828 17:25:02.566870   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.569336   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569624   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569666   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.569683   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569805   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.569994   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.570191   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.570283   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.570305   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.570433   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.570526   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.570598   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.570726   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.570848   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.647267   35789 ssh_runner.go:195] Run: systemctl --version
	I0828 17:25:02.690791   35789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:25:02.850816   35789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:25:02.859352   35789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:25:02.859422   35789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:25:02.868271   35789 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 17:25:02.868297   35789 start.go:495] detecting cgroup driver to use...
	I0828 17:25:02.868360   35789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:25:02.883643   35789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:25:02.897550   35789 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:25:02.897612   35789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:25:02.911123   35789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:25:02.925312   35789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:25:03.077036   35789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:25:03.231825   35789 docker.go:233] disabling docker service ...
	I0828 17:25:03.231898   35789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:25:03.251836   35789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:25:03.266267   35789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:25:03.410278   35789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:25:03.553159   35789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:25:03.566904   35789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:25:03.584608   35789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:25:03.584660   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.594989   35789 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:25:03.595048   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.605222   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.615401   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.625770   35789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:25:03.636199   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.646418   35789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.656748   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.667156   35789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:25:03.676961   35789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:25:03.718259   35789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:25:03.990554   35789 ssh_runner.go:195] Run: sudo systemctl restart crio
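	(Illustrative sketch, not part of the captured log: after the sed edits above, the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf should end up roughly like the fragment below; the section headers and any other keys already present on the VM are assumptions here.)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]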
	I0828 17:25:04.288293   35789 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:25:04.288360   35789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:25:04.292986   35789 start.go:563] Will wait 60s for crictl version
	I0828 17:25:04.293045   35789 ssh_runner.go:195] Run: which crictl
	I0828 17:25:04.296567   35789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:25:04.336758   35789 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:25:04.336829   35789 ssh_runner.go:195] Run: crio --version
	I0828 17:25:04.365260   35789 ssh_runner.go:195] Run: crio --version
	I0828 17:25:04.397712   35789 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:25:04.399003   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:25:04.401568   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:04.401832   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:04.401857   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:04.402016   35789 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:25:04.406801   35789 kubeadm.go:883] updating cluster {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:25:04.407085   35789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:25:04.407150   35789 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:25:04.451120   35789 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:25:04.451148   35789 crio.go:433] Images already preloaded, skipping extraction
	I0828 17:25:04.451205   35789 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:25:04.483463   35789 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:25:04.483491   35789 cache_images.go:84] Images are preloaded, skipping loading
	I0828 17:25:04.483503   35789 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0828 17:25:04.483620   35789 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:25:04.483700   35789 ssh_runner.go:195] Run: crio config
	I0828 17:25:04.531875   35789 cni.go:84] Creating CNI manager for ""
	I0828 17:25:04.531896   35789 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 17:25:04.531904   35789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:25:04.531928   35789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-240486 NodeName:ha-240486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:25:04.532089   35789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-240486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 17:25:04.532119   35789 kube-vip.go:115] generating kube-vip config ...
	I0828 17:25:04.532173   35789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:25:04.543263   35789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:25:04.543400   35789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
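	(Illustrative check, assumed rather than taken from the log: once the kube-vip static pod above wins leader election, the VIP 192.168.39.254 from its address env var should be bound on the interface named in vip_interface, and the API server should answer on it. Something like the following, run on the control-plane host, could confirm that.)
	ip addr show eth0 | grep 192.168.39.254
	curl -k https://192.168.39.254:8443/version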
	I0828 17:25:04.543465   35789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:25:04.552303   35789 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:25:04.552389   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0828 17:25:04.561212   35789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0828 17:25:04.580297   35789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:25:04.595719   35789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0828 17:25:04.610817   35789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:25:04.627827   35789 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:25:04.631559   35789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:25:04.773081   35789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:25:04.786360   35789 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.227
	I0828 17:25:04.786382   35789 certs.go:194] generating shared ca certs ...
	I0828 17:25:04.786397   35789 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:04.786527   35789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:25:04.786571   35789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:25:04.786579   35789 certs.go:256] generating profile certs ...
	I0828 17:25:04.786655   35789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:25:04.786680   35789 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec
	I0828 17:25:04.786693   35789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.28 192.168.39.254]
	I0828 17:25:05.048941   35789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec ...
	I0828 17:25:05.048972   35789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec: {Name:mk861fecc78047e15c79214d24f5e8155355b432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:05.049133   35789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec ...
	I0828 17:25:05.049143   35789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec: {Name:mk8bd5a26c1a54101a89c3b0564624de3c5322d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:05.049211   35789 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:25:05.049376   35789 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:25:05.049520   35789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:25:05.049535   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:25:05.049549   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:25:05.049563   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:25:05.049605   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:25:05.049625   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:25:05.049636   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:25:05.049652   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:25:05.049674   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:25:05.049719   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:25:05.049745   35789 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:25:05.049753   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:25:05.049773   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:25:05.049795   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:25:05.049814   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:25:05.049848   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:25:05.049875   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.049889   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.049901   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.050507   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:25:05.074438   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:25:05.096374   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:25:05.118267   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:25:05.141006   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 17:25:05.162140   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:25:05.183370   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:25:05.205628   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:25:05.227312   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:25:05.250216   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:25:05.271912   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:25:05.294254   35789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:25:05.309339   35789 ssh_runner.go:195] Run: openssl version
	I0828 17:25:05.314817   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:25:05.324994   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.329186   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.329252   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.334714   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:25:05.343753   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:25:05.353967   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.358695   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.358751   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.364355   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:25:05.373901   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:25:05.384328   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.388682   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.388731   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.394136   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
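	(Context for the symlink commands above, stated as an assumption about intent: OpenSSL looks up CA certificates in /etc/ssl/certs by a subject-hash filename, so each PEM gets a <hash>.0 link; the same link could be created by hand like this.)
	ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0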
	I0828 17:25:05.403336   35789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:25:05.407754   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 17:25:05.413381   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 17:25:05.418864   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 17:25:05.424204   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 17:25:05.429757   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 17:25:05.434929   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
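	(For reference: -checkend 86400 makes openssl exit non-zero if the certificate will expire within the next 86400 seconds, i.e. 24 hours. An illustrative standalone check, with the path taken from the copies made earlier in this log, would be:)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver cert valid for at least 24h" \
	  || echo "apiserver cert expires within 24h"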
	I0828 17:25:05.440115   35789 kubeadm.go:392] StartCluster: {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:25:05.440240   35789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:25:05.440294   35789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:25:05.475047   35789 cri.go:89] found id: "dce20397a3c454526e6cd3309071f31f943894f7e9043c84c8dd24be076b4e86"
	I0828 17:25:05.475074   35789 cri.go:89] found id: "03b8618147a9f8fe0ed74b3064a117f5a3fddbf3c0439c61314f657416e2c4ca"
	I0828 17:25:05.475080   35789 cri.go:89] found id: "0f5b811659f6edeb6d1f6de19fecaecc7791089d8c22cbfd3d3bfc30be215626"
	I0828 17:25:05.475084   35789 cri.go:89] found id: "fd86c846060e5e6db0a04c43e159d479fba1953aa54543ccfc94c815b790873e"
	I0828 17:25:05.475091   35789 cri.go:89] found id: "687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342"
	I0828 17:25:05.475096   35789 cri.go:89] found id: "5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc"
	I0828 17:25:05.475100   35789 cri.go:89] found id: "a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79"
	I0828 17:25:05.475104   35789 cri.go:89] found id: "5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd"
	I0828 17:25:05.475108   35789 cri.go:89] found id: "e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846"
	I0828 17:25:05.475115   35789 cri.go:89] found id: "1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096"
	I0828 17:25:05.475133   35789 cri.go:89] found id: "6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594"
	I0828 17:25:05.475139   35789 cri.go:89] found id: "594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe"
	I0828 17:25:05.475143   35789 cri.go:89] found id: "6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883"
	I0828 17:25:05.475147   35789 cri.go:89] found id: ""
	I0828 17:25:05.475202   35789 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.784495752Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866092784471056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40e1501a-8ec0-4538-9b1f-ac296255e865 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.785216473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95e8d58c-077a-4f24-a568-4889af65f1ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.785274375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95e8d58c-077a-4f24-a568-4889af65f1ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.785824242Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=95e8d58c-077a-4f24-a568-4889af65f1ce name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.838211209Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ced88c6f-821c-49a7-a10d-6fc709bb029c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.838334550Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ced88c6f-821c-49a7-a10d-6fc709bb029c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.839700449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79ae3c54-03b9-405d-bd23-14fdd77b123d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.840743448Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866092840717564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79ae3c54-03b9-405d-bd23-14fdd77b123d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.841467164Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2da5e36-655b-4fe8-9fa4-e99a8c68f243 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.841654193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2da5e36-655b-4fe8-9fa4-e99a8c68f243 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.842196141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2da5e36-655b-4fe8-9fa4-e99a8c68f243 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.886484301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4db435e2-38ad-41c2-9c0c-9efb88b0ddcb name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.886579073Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4db435e2-38ad-41c2-9c0c-9efb88b0ddcb name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.887782706Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf69cdb3-6b24-41ca-bccb-352998a124fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.888388425Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866092888363314,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf69cdb3-6b24-41ca-bccb-352998a124fe name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.888889564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=107da1ae-582b-40c5-806b-260310da79fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.889017550Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=107da1ae-582b-40c5-806b-260310da79fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.890133075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=107da1ae-582b-40c5-806b-260310da79fb name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.951424730Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d9e407f-8a9f-4039-9cf9-8dbdd716bb82 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.951540027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d9e407f-8a9f-4039-9cf9-8dbdd716bb82 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.952381222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd56c8a7-7776-47de-8d64-23d9dd75bfb9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.952844483Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866092952821097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd56c8a7-7776-47de-8d64-23d9dd75bfb9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.953530998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=849dea28-38e2-4748-812c-0b18b039a9d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.953610603Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=849dea28-38e2-4748-812c-0b18b039a9d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:28:12 ha-240486 crio[3783]: time="2024-08-28 17:28:12.954056853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=849dea28-38e2-4748-812c-0b18b039a9d2 name=/runtime.v1.RuntimeService/ListContainers
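	A minimal sketch of how the CRI-O debug entries above could be inspected directly on the node, assuming the ha-240486 profile is still running and the crio systemd unit name is unchanged:
	
	    # Tail the CRI-O journal on the primary control-plane node (profile name taken from the log context above).
	    minikube -p ha-240486 ssh -- sudo journalctl -u crio --no-pager | tail -n 200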
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	125af17f49951       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   0153f5f1c0471       storage-provisioner
	8aaf299429f94       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Running             kube-apiserver            3                   42b0e6e318759       kube-apiserver-ha-240486
	60967cc1348fa       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Running             kube-controller-manager   2                   ac7692d6be35f       kube-controller-manager-ha-240486
	0410b3c4ea3db       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   bfd5082b0ae05       busybox-7dff88458-tnmmz
	34999fd725a1d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   6582087fafb9e       kube-vip-ha-240486
	a672724aec167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   ca10600dff018       coredns-6f6b679f8f-x562s
	de2aca740592a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      3 minutes ago        Running             kube-proxy                1                   bba2e49f096e9       kube-proxy-jdnzs
	3321ff37258a7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      3 minutes ago        Running             kindnet-cni               1                   33cba4b4c7b39       kindnet-pb8m7
	083c1edf6582c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      3 minutes ago        Exited              kube-apiserver            2                   42b0e6e318759       kube-apiserver-ha-240486
	abe13582c4837       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      3 minutes ago        Running             kube-scheduler            1                   fe66166e98fcd       kube-scheduler-ha-240486
	53addc0306f89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago        Running             coredns                   1                   1f8473f5c912e       coredns-6f6b679f8f-wtzml
	9b34b34a42087       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      3 minutes ago        Exited              kube-controller-manager   1                   ac7692d6be35f       kube-controller-manager-ha-240486
	092b3fd67ccf5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      3 minutes ago        Running             etcd                      1                   df5ad7c301f15       etcd-ha-240486
	d95e4e712d6d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago        Exited              storage-provisioner       3                   0153f5f1c0471       storage-provisioner
	d5a3adee06612       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   23adeed9e41e9       busybox-7dff88458-tnmmz
	687020da7d252       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   375c7b919327c       coredns-6f6b679f8f-x562s
	5171fb49fa83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   2efd086107969       coredns-6f6b679f8f-wtzml
	a200b18d5b49f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    13 minutes ago       Exited              kindnet-cni               0                   0d9937bfda982       kindnet-pb8m7
	5da7c6652ad91       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago       Exited              kube-proxy                0                   762e2586bed26       kube-proxy-jdnzs
	1396de2dd1902       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago       Exited              kube-scheduler            0                   98bba66b20012       kube-scheduler-ha-240486
	6006f9215c80c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago       Exited              etcd                      0                   2280901ed00fa       etcd-ha-240486
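	A minimal sketch of reproducing the container status table above by hand, assuming shell access to the node through the ha-240486 profile:
	
	    # List all containers (running and exited) via the CRI, matching the '==> container status <==' table.
	    minikube -p ha-240486 ssh -- sudo crictl ps -a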
	
	
	==> coredns [5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc] <==
	[INFO] 10.244.1.2:42445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000451167s
	[INFO] 10.244.3.2:36990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118942s
	[INFO] 10.244.3.2:49081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261149s
	[INFO] 10.244.3.2:35420 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157575s
	[INFO] 10.244.3.2:45145 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000273687s
	[INFO] 10.244.0.4:59568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001810378s
	[INFO] 10.244.1.2:40640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138766s
	[INFO] 10.244.1.2:36403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155827s
	[INFO] 10.244.1.2:57247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096044s
	[INFO] 10.244.3.2:58745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021909s
	[INFO] 10.244.3.2:52666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001012s
	[INFO] 10.244.3.2:55195 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202518s
	[INFO] 10.244.0.4:50754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164536s
	[INFO] 10.244.0.4:52876 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113989s
	[INFO] 10.244.1.2:43752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149181s
	[INFO] 10.244.1.2:39336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272379s
	[INFO] 10.244.1.2:54086 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180306s
	[INFO] 10.244.1.2:35731 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186612s
	[INFO] 10.244.3.2:38396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014603s
	[INFO] 10.244.3.2:37082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155781s
	[INFO] 10.244.0.4:42529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117311s
	[INFO] 10.244.0.4:54981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113539s
	[INFO] 10.244.0.4:46325 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:60024->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:60024->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:60018->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:60018->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342] <==
	[INFO] 10.244.1.2:45832 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227329s
	[INFO] 10.244.1.2:55717 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010110062s
	[INFO] 10.244.1.2:36777 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189682s
	[INFO] 10.244.1.2:33751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105145s
	[INFO] 10.244.1.2:34860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088194s
	[INFO] 10.244.3.2:43474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844418s
	[INFO] 10.244.3.2:42113 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123683s
	[INFO] 10.244.3.2:54119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001316499s
	[INFO] 10.244.3.2:41393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061254s
	[INFO] 10.244.0.4:35761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174103s
	[INFO] 10.244.0.4:35492 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135318s
	[INFO] 10.244.0.4:41816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037492s
	[INFO] 10.244.0.4:56198 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00165456s
	[INFO] 10.244.0.4:42294 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034332s
	[INFO] 10.244.0.4:49049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062307s
	[INFO] 10.244.0.4:43851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033836s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119804s
	[INFO] 10.244.3.2:50434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105903s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063169s
	[INFO] 10.244.0.4:51605 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004099s
	[INFO] 10.244.3.2:53550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157853s
	[INFO] 10.244.3.2:55570 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000261867s
	[INFO] 10.244.0.4:50195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000278101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: Trace[821465324]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Aug-2024 17:25:08.069) (total time: 17220ms):
	Trace[821465324]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host 17220ms (17:25:25.290)
	Trace[821465324]: [17.220925378s] [17.220925378s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:44568->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:44568->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
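	The repeated "no route to host" and "connection refused" errors against 10.96.0.1:443 indicate that CoreDNS could not reach the kube-apiserver Service ClusterIP while the control plane was restarting. A minimal sketch of a reachability check from the node, assuming curl is available in the guest image (the /healthz path is served unauthenticated by default RBAC):
	
	    # Probe the kubernetes Service ClusterIP; "ok" means the apiserver is reachable again through kube-proxy.
	    minikube -p ha-240486 ssh -- curl -sk https://10.96.0.1:443/healthz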
	
	
	==> describe nodes <==
	Name:               ha-240486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:28:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-240486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73dbe7f63fd4c3baf977a4b53641230
	  System UUID:                b73dbe7f-63fd-4c3b-af97-7a4b53641230
	  Boot ID:                    cb154fe5-0aad-4938-bd54-d2af34922b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tnmmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-wtzml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-6f6b679f8f-x562s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-240486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-pb8m7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-240486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-240486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-jdnzs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-240486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-240486                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-240486 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-240486 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-240486 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-240486 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Warning  ContainerGCFailed        3m49s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             3m10s (x3 over 3m59s)  kubelet          Node ha-240486 status is now: NodeNotReady
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           2m20s                  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           35s                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	
	
	Name:               ha-240486-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:28:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-240486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9be8698d6a9a4f2dbc236b4faf8196d2
	  System UUID:                9be8698d-6a9a-4f2d-bc23-6b4faf8196d2
	  Boot ID:                    90d651b1-e0cc-4ce0-b518-997ffe0b527a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5pjcm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-240486-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-q9q9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-240486-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-240486-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4w7tt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-240486-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-240486-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 2m18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     12m                    cidrAllocator    Node ha-240486-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  NodeNotReady             9m26s                  node-controller  Node ha-240486-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m45s (x8 over 2m45s)  kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m45s (x8 over 2m45s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m45s (x7 over 2m45s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m23s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           35s                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	
	
	Name:               ha-240486-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_16_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:16:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:28:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:27:49 +0000   Wed, 28 Aug 2024 17:27:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:27:49 +0000   Wed, 28 Aug 2024 17:27:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:27:49 +0000   Wed, 28 Aug 2024 17:27:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:27:49 +0000   Wed, 28 Aug 2024 17:27:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    ha-240486-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b793a5caef8d481e8356b8025697789a
	  System UUID:                b793a5ca-ef8d-481e-8356-b8025697789a
	  Boot ID:                    be03108e-cbb8-43fb-9c8a-6ab01007d6db
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dtp5b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-240486-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-bgr7f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-240486-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-240486-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-ktw9z                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-240486-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-240486-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 38s                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-240486-m03 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-240486-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal   RegisteredNode           2m23s              node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal   RegisteredNode           2m20s              node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	  Normal   NodeNotReady             103s               node-controller  Node ha-240486-m03 status is now: NodeNotReady
	  Normal   Starting                 55s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  55s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 55s                kubelet          Node ha-240486-m03 has been rebooted, boot id: be03108e-cbb8-43fb-9c8a-6ab01007d6db
	  Normal   NodeHasSufficientMemory  55s (x2 over 55s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x2 over 55s)  kubelet          Node ha-240486-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x2 over 55s)  kubelet          Node ha-240486-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                55s                kubelet          Node ha-240486-m03 status is now: NodeReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-240486-m03 event: Registered Node ha-240486-m03 in Controller
	
	
	Name:               ha-240486-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:17:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:28:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:28:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:28:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:28:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:28:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-240486-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dbc2f47ba234abeb085dbeb264b66eb
	  System UUID:                2dbc2f47-ba23-4abe-b085-dbeb264b66eb
	  Boot ID:                    f9dcd4b8-7f25-4cbf-a4a4-bcae5a976c12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gngl7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-jlk49    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   CIDRAssignmentFailed     10m                cidrAllocator    Node ha-240486-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-240486-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m23s              node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           2m20s              node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   NodeNotReady             103s               node-controller  Node ha-240486-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           35s                node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-240486-m04 has been rebooted, boot id: f9dcd4b8-7f25-4cbf-a4a4-bcae5a976c12
	  Normal   NodeReady                8s                 kubelet          Node ha-240486-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +5.828086] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054618] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049350] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.168729] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141709] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.274713] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.746562] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.326521] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.055344] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.078869] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.096594] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.273661] kauditd_printk_skb: 28 callbacks suppressed
	[ +15.597500] kauditd_printk_skb: 31 callbacks suppressed
	[Aug28 17:15] kauditd_printk_skb: 26 callbacks suppressed
	[Aug28 17:21] kauditd_printk_skb: 1 callbacks suppressed
	[Aug28 17:25] systemd-fstab-generator[3546]: Ignoring "noauto" option for root device
	[  +0.149015] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	[  +0.181285] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.151663] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +0.359143] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.849767] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[  +3.408696] kauditd_printk_skb: 222 callbacks suppressed
	[ +17.949668] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.440144] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.673988] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889] <==
	{"level":"warn","ts":"2024-08-28T17:27:16.251013Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.28:2380/version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:16.251068Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:18.103778Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:18.103781Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:20.253564Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.28:2380/version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:20.253697Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:23.104838Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:23.105044Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:24.256014Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.28:2380/version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:24.256074Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:28.105999Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:28.106146Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"f1587dbaa7d9fdc3","rtt":"0s","error":"dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:28.258691Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.28:2380/version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-28T17:27:28.258826Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f1587dbaa7d9fdc3","error":"Get \"https://192.168.39.28:2380/version\": dial tcp 192.168.39.28:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-28T17:27:30.246804Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:30.246871Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:30.253667Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:30.267680Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"f1587dbaa7d9fdc3","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-28T17:27:30.267740Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:30.268233Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"bcb2eab2b5d0a9fc","to":"f1587dbaa7d9fdc3","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-28T17:27:30.268358Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:34.050440Z","caller":"traceutil/trace.go:171","msg":"trace[1681486824] transaction","detail":"{read_only:false; response_revision:2469; number_of_response:1; }","duration":"103.92892ms","start":"2024-08-28T17:27:33.946449Z","end":"2024-08-28T17:27:34.050378Z","steps":["trace[1681486824] 'process raft request'  (duration: 92.868093ms)","trace[1681486824] 'compare'  (duration: 10.955746ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T17:27:41.087275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.018555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-240486-m03\" ","response":"range_response_count:1 size:3874"}
	{"level":"info","ts":"2024-08-28T17:27:41.087404Z","caller":"traceutil/trace.go:171","msg":"trace[1494469519] range","detail":"{range_begin:/registry/minions/ha-240486-m03; range_end:; response_count:1; response_revision:2506; }","duration":"102.207217ms","start":"2024-08-28T17:27:40.985185Z","end":"2024-08-28T17:27:41.087393Z","steps":["trace[1494469519] 'range keys from in-memory index tree'  (duration: 101.180297ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:28:09.403090Z","caller":"traceutil/trace.go:171","msg":"trace[508714150] transaction","detail":"{read_only:false; response_revision:2603; number_of_response:1; }","duration":"117.778059ms","start":"2024-08-28T17:28:09.285299Z","end":"2024-08-28T17:28:09.403077Z","steps":["trace[508714150] 'process raft request'  (duration: 117.664461ms)"],"step_count":1}
	
	
	==> etcd [6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594] <==
	{"level":"warn","ts":"2024-08-28T17:23:31.485846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:23:24.217550Z","time spent":"7.266994739s","remote":"127.0.0.1:41610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	2024/08/28 17:23:31 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-28T17:23:31.548052Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.227:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:23:31.548136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.227:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-28T17:23:31.548228Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bcb2eab2b5d0a9fc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-28T17:23:31.548380Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548420Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548474Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548597Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548654Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548722Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548752Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548775Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.548812Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.548883Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549123Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549218Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549253Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.551239Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"warn","ts":"2024-08-28T17:23:31.551271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.838826413s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-28T17:23:31.551312Z","caller":"traceutil/trace.go:171","msg":"trace[857817843] range","detail":"{range_begin:; range_end:; }","duration":"1.838884074s","start":"2024-08-28T17:23:29.712420Z","end":"2024-08-28T17:23:31.551304Z","steps":["trace[857817843] 'agreement among raft nodes before linearized reading'  (duration: 1.838824814s)"],"step_count":1}
	{"level":"error","ts":"2024-08-28T17:23:31.551344Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-28T17:23:31.551453Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-08-28T17:23:31.551716Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-240486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.227:2380"],"advertise-client-urls":["https://192.168.39.227:2379"]}
	
	
	==> kernel <==
	 17:28:13 up 14 min,  0 users,  load average: 1.09, 0.86, 0.45
	Linux ha-240486 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee] <==
	I0828 17:27:38.684797       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:27:48.686079       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:27:48.686232       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:27:48.686400       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:27:48.686429       1 main.go:299] handling current node
	I0828 17:27:48.686477       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:27:48.686495       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:27:48.686816       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:27:48.686865       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:27:58.685115       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:27:58.685270       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:27:58.685525       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:27:58.685567       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:27:58.685672       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:27:58.685699       1 main.go:299] handling current node
	I0828 17:27:58.685732       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:27:58.685755       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:28:08.677590       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:28:08.677805       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:28:08.678073       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:28:08.678135       1 main.go:299] handling current node
	I0828 17:28:08.678171       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:28:08.678198       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:28:08.678309       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:28:08.678358       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79] <==
	I0828 17:22:54.032843       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:04.033255       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:04.033301       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:04.033484       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:04.033508       1 main.go:299] handling current node
	I0828 17:23:04.033523       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:04.033529       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:04.033592       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:04.033609       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:23:14.040003       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:14.040048       1 main.go:299] handling current node
	I0828 17:23:14.040063       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:14.040090       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:14.040260       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:14.040296       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:23:14.040358       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:14.040376       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:24.037655       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:24.037699       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:24.037854       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:24.037873       1 main.go:299] handling current node
	I0828 17:23:24.037885       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:24.037890       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:24.038004       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:24.038022       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50] <==
	I0828 17:25:08.097050       1 options.go:228] external host was not specified, using 192.168.39.227
	I0828 17:25:08.103278       1 server.go:142] Version: v1.31.0
	I0828 17:25:08.103326       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:08.487285       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0828 17:25:08.513346       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:25:08.517218       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0828 17:25:08.517431       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0828 17:25:08.517684       1 instance.go:232] Using reconciler: lease
	W0828 17:25:28.484744       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0828 17:25:28.485089       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0828 17:25:28.518880       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0828 17:25:28.518993       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6] <==
	I0828 17:25:50.290552       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0828 17:25:50.364658       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:25:50.364784       1 policy_source.go:224] refreshing policies
	I0828 17:25:50.366856       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 17:25:50.366911       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 17:25:50.368196       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0828 17:25:50.368288       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 17:25:50.368460       1 shared_informer.go:320] Caches are synced for configmaps
	I0828 17:25:50.372255       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 17:25:50.373450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 17:25:50.373571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:25:50.393403       1 aggregator.go:171] initial CRD sync complete...
	I0828 17:25:50.393515       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 17:25:50.393546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 17:25:50.393571       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:25:50.408036       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:25:50.413188       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0828 17:25:50.433453       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.28]
	I0828 17:25:50.437354       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:25:50.446902       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0828 17:25:50.451199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0828 17:25:50.455527       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0828 17:25:51.287645       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0828 17:25:51.977809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.227 192.168.39.28]
	W0828 17:26:01.970405       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.227]
	
	
	==> kube-controller-manager [60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2] <==
	I0828 17:26:30.275203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:26:30.428895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.692917ms"
	I0828 17:26:30.433845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="126.046µs"
	I0828 17:26:33.857576       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:26:35.474777       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:26:37.008803       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m02"
	I0828 17:26:43.931693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:26:44.995624       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-qvgpk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-qvgpk\": the object has been modified; please apply your changes to the latest version and try again"
	I0828 17:26:44.996364       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"fa72a7e9-e262-4799-ae3c-67641cc33973", APIVersion:"v1", ResourceVersion:"252", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-qvgpk EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-qvgpk": the object has been modified; please apply your changes to the latest version and try again
	I0828 17:26:45.020568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="50.806072ms"
	I0828 17:26:45.020756       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="79.832µs"
	I0828 17:26:45.553811       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:27:18.563725       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:27:18.585861       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:27:18.828132       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:27:19.628622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.247µs"
	I0828 17:27:37.663697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.281628ms"
	I0828 17:27:37.664316       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="85.733µs"
	I0828 17:27:38.819080       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:27:38.912470       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:27:49.352322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	I0828 17:28:05.915892       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:28:05.916034       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:28:05.932416       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:28:08.846314       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	
	
	==> kube-controller-manager [9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461] <==
	I0828 17:25:08.407376       1 serving.go:386] Generated self-signed cert in-memory
	I0828 17:25:09.124272       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0828 17:25:09.124356       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:09.126194       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0828 17:25:09.126411       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0828 17:25:09.126992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0828 17:25:09.127693       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0828 17:25:29.525542       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.227:8443/healthz\": dial tcp 192.168.39.227:8443: connect: connection refused"
	
	
	==> kube-proxy [5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd] <==
	E0828 17:22:19.306147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.377642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.377854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.378209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.378366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.379483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.379656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.522029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.522207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.522600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.522654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.523121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.523252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.810622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.811283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.810949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:59.242826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:59.243057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:59.243275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:59.243317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:23:05.386768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:23:05.386973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-proxy [de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:25:11.338059       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:14.410703       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:17.481971       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:23.625668       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:32.841519       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0828 17:25:50.706254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0828 17:25:50.706425       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:25:50.742014       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:25:50.742110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:25:50.742167       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:25:50.744460       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:25:50.744814       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:25:50.744873       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:50.746666       1 config.go:197] "Starting service config controller"
	I0828 17:25:50.746743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:25:50.746797       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:25:50.746827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:25:50.749056       1 config.go:326] "Starting node config controller"
	I0828 17:25:50.749143       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:25:50.847794       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:25:50.847999       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:25:50.850138       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096] <==
	E0828 17:16:59.428003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" pod="default/busybox-7dff88458-5pjcm"
	I0828 17:16:59.428093       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m02"
	E0828 17:16:59.424465       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-tnmmz" node="ha-240486-m02"
	E0828 17:16:59.428536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e4608982-afdd-491b-8fdb-ede6a6a4167a(default/busybox-7dff88458-tnmmz) was assumed on ha-240486-m02 but assigned to ha-240486" pod="default/busybox-7dff88458-tnmmz"
	E0828 17:16:59.428571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" pod="default/busybox-7dff88458-tnmmz"
	I0828 17:16:59.428617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-tnmmz" node="ha-240486"
	E0828 17:23:19.637840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:20.679695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0828 17:23:20.862024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0828 17:23:21.165042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:21.527404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0828 17:23:21.876782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0828 17:23:22.917423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0828 17:23:23.486605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:24.741091       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0828 17:23:25.986105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0828 17:23:26.609571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0828 17:23:26.654407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0828 17:23:27.189505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0828 17:23:27.319634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0828 17:23:28.269205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	I0828 17:23:31.434527       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0828 17:23:31.451351       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:23:31.450902       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0828 17:23:31.469021       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f] <==
	W0828 17:25:39.349543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.227:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:39.349605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.227:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:45.556081       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.227:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:45.556135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.227:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:45.886163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:45.886680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.398464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.398505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.467826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.227:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.467885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.227:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.823431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.823503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.927758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.927848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:48.248211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:48.248306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:48.431170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:48.431209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:50.297495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:25:50.298298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:25:50.298480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 17:25:50.298529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:25:50.310619       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:25:50.310749       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:26:05.934175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:26:44 ha-240486 kubelet[1308]: E0828 17:26:44.557225    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866004556809605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:26:44 ha-240486 kubelet[1308]: E0828 17:26:44.557253    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866004556809605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:26:48 ha-240486 kubelet[1308]: I0828 17:26:48.364061    1308 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-240486" podUID="f1caf9b0-cb2f-462f-be58-ee158739bb79"
	Aug 28 17:26:48 ha-240486 kubelet[1308]: I0828 17:26:48.384086    1308 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-240486"
	Aug 28 17:26:54 ha-240486 kubelet[1308]: E0828 17:26:54.560403    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866014559698030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:26:54 ha-240486 kubelet[1308]: E0828 17:26:54.560456    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866014559698030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:04 ha-240486 kubelet[1308]: E0828 17:27:04.562298    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866024561844221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:04 ha-240486 kubelet[1308]: E0828 17:27:04.562625    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866024561844221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:14 ha-240486 kubelet[1308]: E0828 17:27:14.567227    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866034566791725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:14 ha-240486 kubelet[1308]: E0828 17:27:14.567278    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866034566791725,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:24 ha-240486 kubelet[1308]: E0828 17:27:24.386459    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:27:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:27:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:27:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:27:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:27:24 ha-240486 kubelet[1308]: E0828 17:27:24.573503    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866044573023457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:24 ha-240486 kubelet[1308]: E0828 17:27:24.573644    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866044573023457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:34 ha-240486 kubelet[1308]: E0828 17:27:34.575624    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866054574840285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:34 ha-240486 kubelet[1308]: E0828 17:27:34.575679    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866054574840285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:44 ha-240486 kubelet[1308]: E0828 17:27:44.578730    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866064578496015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:44 ha-240486 kubelet[1308]: E0828 17:27:44.578771    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866064578496015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:54 ha-240486 kubelet[1308]: E0828 17:27:54.580894    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866074580403962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:27:54 ha-240486 kubelet[1308]: E0828 17:27:54.581364    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866074580403962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:28:04 ha-240486 kubelet[1308]: E0828 17:28:04.583648    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866084583218168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:28:04 ha-240486 kubelet[1308]: E0828 17:28:04.583687    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866084583218168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 17:28:12.487528   37304 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19529-10317/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-240486 -n ha-240486
helpers_test.go:261: (dbg) Run:  kubectl --context ha-240486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (406.13s)
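The "bufio.Scanner: token too long" error in the stderr above is Go's bufio.ErrTooLong: bufio.Scanner caps a single token at bufio.MaxScanTokenSize (64 KiB) by default, and at least one line of lastStart.txt exceeds that, so the post-mortem could not replay the last start logs. A minimal sketch of reading such a file with a larger scanner buffer, using an illustrative file name and limit (this is not minikube's actual fix):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// "lastStart.txt" stands in for the log file named in the error above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default per-token limit is bufio.MaxScanTokenSize (64 KiB); a longer
	// line yields "bufio.Scanner: token too long". Raise the cap to 16 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}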

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 stop -v=7 --alsologtostderr
E0828 17:29:23.524164   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 stop -v=7 --alsologtostderr: exit status 82 (2m0.463661858s)

                                                
                                                
-- stdout --
	* Stopping node "ha-240486-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:28:31.697847   37713 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:28:31.697980   37713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:28:31.697991   37713 out.go:358] Setting ErrFile to fd 2...
	I0828 17:28:31.697998   37713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:28:31.698201   37713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:28:31.698411   37713 out.go:352] Setting JSON to false
	I0828 17:28:31.698480   37713 mustload.go:65] Loading cluster: ha-240486
	I0828 17:28:31.698838   37713 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:28:31.698913   37713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:28:31.699087   37713 mustload.go:65] Loading cluster: ha-240486
	I0828 17:28:31.699209   37713 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:28:31.699230   37713 stop.go:39] StopHost: ha-240486-m04
	I0828 17:28:31.699582   37713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:28:31.699629   37713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:28:31.714384   37713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35739
	I0828 17:28:31.714904   37713 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:28:31.715528   37713 main.go:141] libmachine: Using API Version  1
	I0828 17:28:31.715552   37713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:28:31.715837   37713 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:28:31.718208   37713 out.go:177] * Stopping node "ha-240486-m04"  ...
	I0828 17:28:31.719466   37713 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 17:28:31.719500   37713 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:28:31.719676   37713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 17:28:31.719697   37713 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:28:31.722761   37713 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:28:31.723175   37713 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:28:00 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:28:31.723197   37713 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:28:31.723364   37713 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:28:31.723528   37713 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:28:31.723708   37713 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:28:31.723864   37713 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	I0828 17:28:31.805391   37713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 17:28:31.859267   37713 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 17:28:31.913071   37713 main.go:141] libmachine: Stopping "ha-240486-m04"...
	I0828 17:28:31.913096   37713 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:28:31.914816   37713 main.go:141] libmachine: (ha-240486-m04) Calling .Stop
	I0828 17:28:31.918177   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 0/120
	I0828 17:28:32.919563   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 1/120
	I0828 17:28:33.921268   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 2/120
	I0828 17:28:34.922723   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 3/120
	I0828 17:28:35.924135   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 4/120
	I0828 17:28:36.926470   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 5/120
	I0828 17:28:37.928610   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 6/120
	I0828 17:28:38.929948   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 7/120
	I0828 17:28:39.931794   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 8/120
	I0828 17:28:40.933175   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 9/120
	I0828 17:28:41.934936   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 10/120
	I0828 17:28:42.936176   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 11/120
	I0828 17:28:43.937553   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 12/120
	I0828 17:28:44.938773   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 13/120
	I0828 17:28:45.940760   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 14/120
	I0828 17:28:46.942674   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 15/120
	I0828 17:28:47.944411   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 16/120
	I0828 17:28:48.946297   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 17/120
	I0828 17:28:49.947487   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 18/120
	I0828 17:28:50.949119   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 19/120
	I0828 17:28:51.951191   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 20/120
	I0828 17:28:52.952677   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 21/120
	I0828 17:28:53.954017   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 22/120
	I0828 17:28:54.955414   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 23/120
	I0828 17:28:55.957047   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 24/120
	I0828 17:28:56.959069   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 25/120
	I0828 17:28:57.960369   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 26/120
	I0828 17:28:58.961614   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 27/120
	I0828 17:28:59.962811   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 28/120
	I0828 17:29:00.964408   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 29/120
	I0828 17:29:01.966486   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 30/120
	I0828 17:29:02.968499   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 31/120
	I0828 17:29:03.969832   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 32/120
	I0828 17:29:04.971113   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 33/120
	I0828 17:29:05.972602   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 34/120
	I0828 17:29:06.974384   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 35/120
	I0828 17:29:07.976456   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 36/120
	I0828 17:29:08.977962   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 37/120
	I0828 17:29:09.979174   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 38/120
	I0828 17:29:10.980490   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 39/120
	I0828 17:29:11.982711   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 40/120
	I0828 17:29:12.984775   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 41/120
	I0828 17:29:13.987038   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 42/120
	I0828 17:29:14.988462   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 43/120
	I0828 17:29:15.989687   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 44/120
	I0828 17:29:16.991204   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 45/120
	I0828 17:29:17.993065   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 46/120
	I0828 17:29:18.994389   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 47/120
	I0828 17:29:19.995669   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 48/120
	I0828 17:29:20.997021   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 49/120
	I0828 17:29:21.999278   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 50/120
	I0828 17:29:23.001348   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 51/120
	I0828 17:29:24.002620   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 52/120
	I0828 17:29:25.004649   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 53/120
	I0828 17:29:26.006201   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 54/120
	I0828 17:29:27.008055   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 55/120
	I0828 17:29:28.009352   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 56/120
	I0828 17:29:29.010684   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 57/120
	I0828 17:29:30.011890   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 58/120
	I0828 17:29:31.013206   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 59/120
	I0828 17:29:32.015251   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 60/120
	I0828 17:29:33.016635   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 61/120
	I0828 17:29:34.017792   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 62/120
	I0828 17:29:35.019134   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 63/120
	I0828 17:29:36.020441   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 64/120
	I0828 17:29:37.022325   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 65/120
	I0828 17:29:38.023733   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 66/120
	I0828 17:29:39.026181   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 67/120
	I0828 17:29:40.027445   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 68/120
	I0828 17:29:41.028786   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 69/120
	I0828 17:29:42.030924   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 70/120
	I0828 17:29:43.032409   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 71/120
	I0828 17:29:44.034291   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 72/120
	I0828 17:29:45.035678   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 73/120
	I0828 17:29:46.037226   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 74/120
	I0828 17:29:47.038905   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 75/120
	I0828 17:29:48.040577   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 76/120
	I0828 17:29:49.042839   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 77/120
	I0828 17:29:50.044622   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 78/120
	I0828 17:29:51.046183   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 79/120
	I0828 17:29:52.048071   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 80/120
	I0828 17:29:53.049268   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 81/120
	I0828 17:29:54.050423   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 82/120
	I0828 17:29:55.052495   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 83/120
	I0828 17:29:56.053831   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 84/120
	I0828 17:29:57.055979   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 85/120
	I0828 17:29:58.057158   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 86/120
	I0828 17:29:59.058430   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 87/120
	I0828 17:30:00.060623   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 88/120
	I0828 17:30:01.061870   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 89/120
	I0828 17:30:02.063966   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 90/120
	I0828 17:30:03.065391   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 91/120
	I0828 17:30:04.067458   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 92/120
	I0828 17:30:05.069083   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 93/120
	I0828 17:30:06.070721   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 94/120
	I0828 17:30:07.072595   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 95/120
	I0828 17:30:08.074135   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 96/120
	I0828 17:30:09.075732   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 97/120
	I0828 17:30:10.077031   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 98/120
	I0828 17:30:11.078481   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 99/120
	I0828 17:30:12.080778   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 100/120
	I0828 17:30:13.082346   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 101/120
	I0828 17:30:14.084514   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 102/120
	I0828 17:30:15.086171   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 103/120
	I0828 17:30:16.087552   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 104/120
	I0828 17:30:17.089376   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 105/120
	I0828 17:30:18.090702   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 106/120
	I0828 17:30:19.092073   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 107/120
	I0828 17:30:20.093946   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 108/120
	I0828 17:30:21.095502   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 109/120
	I0828 17:30:22.097376   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 110/120
	I0828 17:30:23.098873   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 111/120
	I0828 17:30:24.100189   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 112/120
	I0828 17:30:25.101503   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 113/120
	I0828 17:30:26.102876   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 114/120
	I0828 17:30:27.104489   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 115/120
	I0828 17:30:28.107115   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 116/120
	I0828 17:30:29.108528   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 117/120
	I0828 17:30:30.109924   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 118/120
	I0828 17:30:31.111272   37713 main.go:141] libmachine: (ha-240486-m04) Waiting for machine to stop 119/120
	I0828 17:30:32.112403   37713 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 17:30:32.112452   37713 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0828 17:30:32.114274   37713 out.go:201] 
	W0828 17:30:32.115459   37713 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0828 17:30:32.115473   37713 out.go:270] * 
	* 
	W0828 17:30:32.117644   37713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 17:30:32.119062   37713 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-240486 stop -v=7 --alsologtostderr": exit status 82
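The two-minute duration of the failed stop above follows from the polling loop visible in the stderr: 120 "Waiting for machine to stop" checks, one per second, before the command gives up with exit status 82. A minimal Go sketch of that polling pattern, with illustrative names rather than minikube's actual libmachine code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls a state function once per second, mirroring the
// "Waiting for machine to stop N/120" lines: 120 one-second polls make up
// the two-minute window before the stop is declared failed.
func waitForStop(getState func() (string, error), attempts int) error {
	for i := 0; i < attempts; i++ {
		st, err := getState()
		if err == nil && st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never leaves "Running", as in the failed test;
	// a small attempt count keeps the demo quick.
	err := waitForStop(func() (string, error) { return "Running", nil }, 3)
	fmt.Println("stop err:", err)
}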
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
E0828 17:30:46.590256   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr: exit status 3 (18.961846021s)

                                                
                                                
-- stdout --
	ha-240486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-240486-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:30:32.162118   38156 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:30:32.162232   38156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:30:32.162240   38156 out.go:358] Setting ErrFile to fd 2...
	I0828 17:30:32.162244   38156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:30:32.162439   38156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:30:32.162596   38156 out.go:352] Setting JSON to false
	I0828 17:30:32.162616   38156 mustload.go:65] Loading cluster: ha-240486
	I0828 17:30:32.162702   38156 notify.go:220] Checking for updates...
	I0828 17:30:32.162960   38156 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:30:32.162973   38156 status.go:255] checking status of ha-240486 ...
	I0828 17:30:32.163339   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.163378   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.182637   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I0828 17:30:32.183002   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.183510   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.183529   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.183943   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.184135   38156 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:30:32.185586   38156 status.go:330] ha-240486 host status = "Running" (err=<nil>)
	I0828 17:30:32.185602   38156 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:30:32.185892   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.185922   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.200832   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46715
	I0828 17:30:32.201208   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.201655   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.201680   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.201978   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.202166   38156 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:30:32.204920   38156 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:30:32.205333   38156 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:30:32.205364   38156 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:30:32.205490   38156 host.go:66] Checking if "ha-240486" exists ...
	I0828 17:30:32.205810   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.205846   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.220519   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42025
	I0828 17:30:32.220939   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.221423   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.221454   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.221782   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.221966   38156 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:30:32.222201   38156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:30:32.222237   38156 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:30:32.225420   38156 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:30:32.225860   38156 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:30:32.225881   38156 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:30:32.226095   38156 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:30:32.226239   38156 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:30:32.226450   38156 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:30:32.226658   38156 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:30:32.311359   38156 ssh_runner.go:195] Run: systemctl --version
	I0828 17:30:32.318312   38156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:30:32.337594   38156 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:30:32.337623   38156 api_server.go:166] Checking apiserver status ...
	I0828 17:30:32.337659   38156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:30:32.353025   38156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5041/cgroup
	W0828 17:30:32.362317   38156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5041/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:30:32.362364   38156 ssh_runner.go:195] Run: ls
	I0828 17:30:32.366629   38156 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:30:32.370732   38156 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:30:32.370760   38156 status.go:422] ha-240486 apiserver status = Running (err=<nil>)
	I0828 17:30:32.370771   38156 status.go:257] ha-240486 status: &{Name:ha-240486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:30:32.370796   38156 status.go:255] checking status of ha-240486-m02 ...
	I0828 17:30:32.371088   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.371124   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.385827   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42689
	I0828 17:30:32.386267   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.386693   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.386710   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.387057   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.387224   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetState
	I0828 17:30:32.389001   38156 status.go:330] ha-240486-m02 host status = "Running" (err=<nil>)
	I0828 17:30:32.389017   38156 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:30:32.389318   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.389358   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.404773   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0828 17:30:32.405201   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.405672   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.405693   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.406012   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.406217   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetIP
	I0828 17:30:32.409093   38156 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:30:32.409513   38156 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:25:16 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:30:32.409544   38156 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:30:32.409713   38156 host.go:66] Checking if "ha-240486-m02" exists ...
	I0828 17:30:32.410059   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.410116   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.424677   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40611
	I0828 17:30:32.425115   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.425634   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.425652   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.425920   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.426136   38156 main.go:141] libmachine: (ha-240486-m02) Calling .DriverName
	I0828 17:30:32.426319   38156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:30:32.426347   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHHostname
	I0828 17:30:32.428805   38156 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:30:32.429201   38156 main.go:141] libmachine: (ha-240486-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:68:04", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:25:16 +0000 UTC Type:0 Mac:52:54:00:b3:68:04 Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:ha-240486-m02 Clientid:01:52:54:00:b3:68:04}
	I0828 17:30:32.429233   38156 main.go:141] libmachine: (ha-240486-m02) DBG | domain ha-240486-m02 has defined IP address 192.168.39.103 and MAC address 52:54:00:b3:68:04 in network mk-ha-240486
	I0828 17:30:32.429401   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHPort
	I0828 17:30:32.429525   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHKeyPath
	I0828 17:30:32.429688   38156 main.go:141] libmachine: (ha-240486-m02) Calling .GetSSHUsername
	I0828 17:30:32.429843   38156 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m02/id_rsa Username:docker}
	I0828 17:30:32.511262   38156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:30:32.529395   38156 kubeconfig.go:125] found "ha-240486" server: "https://192.168.39.254:8443"
	I0828 17:30:32.529426   38156 api_server.go:166] Checking apiserver status ...
	I0828 17:30:32.529471   38156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:30:32.547244   38156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1402/cgroup
	W0828 17:30:32.556620   38156 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1402/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:30:32.556687   38156 ssh_runner.go:195] Run: ls
	I0828 17:30:32.561589   38156 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0828 17:30:32.565733   38156 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0828 17:30:32.565754   38156 status.go:422] ha-240486-m02 apiserver status = Running (err=<nil>)
	I0828 17:30:32.565762   38156 status.go:257] ha-240486-m02 status: &{Name:ha-240486-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:30:32.565776   38156 status.go:255] checking status of ha-240486-m04 ...
	I0828 17:30:32.566057   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.566113   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.581094   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0828 17:30:32.581489   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.581865   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.581886   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.582215   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.582362   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetState
	I0828 17:30:32.583890   38156 status.go:330] ha-240486-m04 host status = "Running" (err=<nil>)
	I0828 17:30:32.583906   38156 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:30:32.584310   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.584352   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.598831   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42905
	I0828 17:30:32.599278   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.599665   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.599693   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.599972   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.600134   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetIP
	I0828 17:30:32.602796   38156 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:30:32.603201   38156 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:28:00 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:30:32.603227   38156 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:30:32.603369   38156 host.go:66] Checking if "ha-240486-m04" exists ...
	I0828 17:30:32.603768   38156 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:30:32.603809   38156 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:30:32.618913   38156 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0828 17:30:32.619298   38156 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:30:32.619770   38156 main.go:141] libmachine: Using API Version  1
	I0828 17:30:32.619798   38156 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:30:32.620120   38156 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:30:32.620318   38156 main.go:141] libmachine: (ha-240486-m04) Calling .DriverName
	I0828 17:30:32.620532   38156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:30:32.620555   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHHostname
	I0828 17:30:32.623260   38156 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:30:32.623715   38156 main.go:141] libmachine: (ha-240486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1f:e3:89", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:28:00 +0000 UTC Type:0 Mac:52:54:00:1f:e3:89 Iaid: IPaddr:192.168.39.125 Prefix:24 Hostname:ha-240486-m04 Clientid:01:52:54:00:1f:e3:89}
	I0828 17:30:32.623751   38156 main.go:141] libmachine: (ha-240486-m04) DBG | domain ha-240486-m04 has defined IP address 192.168.39.125 and MAC address 52:54:00:1f:e3:89 in network mk-ha-240486
	I0828 17:30:32.623895   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHPort
	I0828 17:30:32.624067   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHKeyPath
	I0828 17:30:32.624214   38156 main.go:141] libmachine: (ha-240486-m04) Calling .GetSSHUsername
	I0828 17:30:32.624362   38156 sshutil.go:53] new ssh client: &{IP:192.168.39.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486-m04/id_rsa Username:docker}
	W0828 17:30:51.082339   38156 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.125:22: connect: no route to host
	W0828 17:30:51.082447   38156 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	E0828 17:30:51.082470   38156 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host
	I0828 17:30:51.082484   38156 status.go:257] ha-240486-m04 status: &{Name:ha-240486-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0828 17:30:51.082507   38156 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.125:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr" : exit status 3
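The status failure above reduces to an SSH dial error against 192.168.39.125:22 ("no route to host") while probing ha-240486-m04, which is consistent with that node already being powered off by the preceding stop command. As a rough illustration only (this is not minikube's sshutil code; the hard-coded address and the 10-second timeout are assumptions for the sketch), the same class of error can be reproduced with a plain TCP probe in Go:

// Illustrative sketch: probe a node's SSH port with a timeout to surface
// the same kind of dial failure seen in the log above. Not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

// probeSSH returns an error if a TCP connection to addr cannot be
// established within the given timeout.
func probeSSH(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("node unreachable: %w", err)
	}
	defer conn.Close()
	return nil
}

func main() {
	// 192.168.39.125:22 is the stopped worker node from the log above.
	if err := probeSSH("192.168.39.125:22", 10*time.Second); err != nil {
		// e.g. "node unreachable: dial tcp 192.168.39.125:22: connect: no route to host"
		fmt.Println(err)
	}
}

When the probe fails this way, status reporting falls back to Host:Error / Kubelet:Nonexistent for that node, which is exactly what the &{Name:ha-240486-m04 Host:Error ...} line in the trace above records.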
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-240486 -n ha-240486
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-240486 logs -n 25: (1.583002494s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m04 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp testdata/cp-test.txt                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486:/home/docker/cp-test_ha-240486-m04_ha-240486.txt                       |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486 sudo cat                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486.txt                                 |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m02:/home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m02 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m03:/home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n                                                                 | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | ha-240486-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-240486 ssh -n ha-240486-m03 sudo cat                                          | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC | 28 Aug 24 17:18 UTC |
	|         | /home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-240486 node stop m02 -v=7                                                     | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-240486 node start m02 -v=7                                                    | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:20 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-240486 -v=7                                                           | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-240486 -v=7                                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:21 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-240486 --wait=true -v=7                                                    | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:23 UTC | 28 Aug 24 17:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-240486                                                                | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:28 UTC |                     |
	| node    | ha-240486 node delete m03 -v=7                                                   | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:28 UTC | 28 Aug 24 17:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-240486 stop -v=7                                                              | ha-240486 | jenkins | v1.33.1 | 28 Aug 24 17:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:23:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:23:30.615836   35789 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:23:30.615952   35789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:23:30.615961   35789 out.go:358] Setting ErrFile to fd 2...
	I0828 17:23:30.615965   35789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:23:30.616146   35789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:23:30.616698   35789 out.go:352] Setting JSON to false
	I0828 17:23:30.617654   35789 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3957,"bootTime":1724861854,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:23:30.617709   35789 start.go:139] virtualization: kvm guest
	I0828 17:23:30.619933   35789 out.go:177] * [ha-240486] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:23:30.621177   35789 notify.go:220] Checking for updates...
	I0828 17:23:30.621211   35789 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:23:30.622540   35789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:23:30.623980   35789 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:23:30.625233   35789 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:23:30.626281   35789 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:23:30.627368   35789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:23:30.628809   35789 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:23:30.628886   35789 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:23:30.629288   35789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:23:30.629340   35789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:23:30.644356   35789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0828 17:23:30.644750   35789 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:23:30.645232   35789 main.go:141] libmachine: Using API Version  1
	I0828 17:23:30.645250   35789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:23:30.645608   35789 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:23:30.645775   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.680485   35789 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:23:30.681636   35789 start.go:297] selected driver: kvm2
	I0828 17:23:30.681656   35789 start.go:901] validating driver "kvm2" against &{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:23:30.681801   35789 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:23:30.682131   35789 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:23:30.682195   35789 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:23:30.697068   35789 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:23:30.697970   35789 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:23:30.698035   35789 cni.go:84] Creating CNI manager for ""
	I0828 17:23:30.698046   35789 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 17:23:30.698140   35789 start.go:340] cluster config:
	{Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:23:30.698263   35789 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:23:30.699950   35789 out.go:177] * Starting "ha-240486" primary control-plane node in "ha-240486" cluster
	I0828 17:23:30.701019   35789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:23:30.701050   35789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:23:30.701059   35789 cache.go:56] Caching tarball of preloaded images
	I0828 17:23:30.701136   35789 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:23:30.701148   35789 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:23:30.701289   35789 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/config.json ...
	I0828 17:23:30.701523   35789 start.go:360] acquireMachinesLock for ha-240486: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:23:30.701572   35789 start.go:364] duration metric: took 22.404µs to acquireMachinesLock for "ha-240486"
	I0828 17:23:30.701586   35789 start.go:96] Skipping create...Using existing machine configuration
	I0828 17:23:30.701596   35789 fix.go:54] fixHost starting: 
	I0828 17:23:30.701838   35789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:23:30.701869   35789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:23:30.716122   35789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I0828 17:23:30.716502   35789 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:23:30.716942   35789 main.go:141] libmachine: Using API Version  1
	I0828 17:23:30.716960   35789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:23:30.717265   35789 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:23:30.717443   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.717620   35789 main.go:141] libmachine: (ha-240486) Calling .GetState
	I0828 17:23:30.719219   35789 fix.go:112] recreateIfNeeded on ha-240486: state=Running err=<nil>
	W0828 17:23:30.719252   35789 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 17:23:30.721157   35789 out.go:177] * Updating the running kvm2 "ha-240486" VM ...
	I0828 17:23:30.722478   35789 machine.go:93] provisionDockerMachine start ...
	I0828 17:23:30.722500   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:23:30.722694   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.725260   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.725686   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.725704   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.725862   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.726011   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.726187   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.726297   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.726465   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.726650   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.726662   35789 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:23:30.834878   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:23:30.834903   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:30.835126   35789 buildroot.go:166] provisioning hostname "ha-240486"
	I0828 17:23:30.835152   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:30.835389   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.837891   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.838284   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.838311   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.838404   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.838568   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.838730   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.838886   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.839022   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.839189   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.839200   35789 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-240486 && echo "ha-240486" | sudo tee /etc/hostname
	I0828 17:23:30.962253   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-240486
	
	I0828 17:23:30.962275   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:30.965694   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.966128   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:30.966153   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:30.966370   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:30.966551   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.966724   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:30.966870   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:30.967030   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:30.967194   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:30.967208   35789 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-240486' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-240486/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-240486' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:23:31.074524   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:23:31.074552   35789 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:23:31.074580   35789 buildroot.go:174] setting up certificates
	I0828 17:23:31.074589   35789 provision.go:84] configureAuth start
	I0828 17:23:31.074596   35789 main.go:141] libmachine: (ha-240486) Calling .GetMachineName
	I0828 17:23:31.074880   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:23:31.077723   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.078119   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.078140   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.078255   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.080489   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.080800   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.080825   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.080936   35789 provision.go:143] copyHostCerts
	I0828 17:23:31.080980   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:23:31.081014   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:23:31.081028   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:23:31.081100   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:23:31.081180   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:23:31.081197   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:23:31.081204   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:23:31.081237   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:23:31.081275   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:23:31.081291   35789 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:23:31.081297   35789 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:23:31.081321   35789 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:23:31.081379   35789 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.ha-240486 san=[127.0.0.1 192.168.39.227 ha-240486 localhost minikube]
	I0828 17:23:31.146597   35789 provision.go:177] copyRemoteCerts
	I0828 17:23:31.146677   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:23:31.146709   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.149336   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.149690   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.149721   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.149840   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:31.149987   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.150129   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:31.150274   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:23:31.233214   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:23:31.233335   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:23:31.263238   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:23:31.263311   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0828 17:23:31.293351   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:23:31.293424   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:23:31.324577   35789 provision.go:87] duration metric: took 249.97554ms to configureAuth
	I0828 17:23:31.324608   35789 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:23:31.324862   35789 config.go:182] Loaded profile config "ha-240486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:23:31.324947   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:23:31.327433   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.327839   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:23:31.327865   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:23:31.328049   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:23:31.328213   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.328389   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:23:31.328591   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:23:31.328790   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:23:31.329001   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:23:31.329016   35789 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:25:02.231932   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:25:02.231958   35789 machine.go:96] duration metric: took 1m31.509466053s to provisionDockerMachine
	I0828 17:25:02.231973   35789 start.go:293] postStartSetup for "ha-240486" (driver="kvm2")
	I0828 17:25:02.231986   35789 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:25:02.232005   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.232340   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:25:02.232364   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.235426   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.235956   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.235987   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.236179   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.236359   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.236538   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.236705   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.321951   35789 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:25:02.326490   35789 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:25:02.326514   35789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:25:02.326593   35789 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:25:02.326704   35789 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:25:02.326715   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:25:02.326821   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:25:02.336171   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:25:02.360269   35789 start.go:296] duration metric: took 128.28075ms for postStartSetup
	I0828 17:25:02.360315   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.360596   35789 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0828 17:25:02.360623   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.362984   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.363404   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.363427   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.363568   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.363741   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.363912   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.364008   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	W0828 17:25:02.448084   35789 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0828 17:25:02.448125   35789 fix.go:56] duration metric: took 1m31.746521442s for fixHost
	I0828 17:25:02.448148   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.450857   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.451363   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.451396   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.451532   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.451726   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.451895   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.452019   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.452167   35789 main.go:141] libmachine: Using SSH client type: native
	I0828 17:25:02.452344   35789 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.227 22 <nil> <nil>}
	I0828 17:25:02.452359   35789 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:25:02.562821   35789 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724865902.517036600
	
	I0828 17:25:02.562844   35789 fix.go:216] guest clock: 1724865902.517036600
	I0828 17:25:02.562851   35789 fix.go:229] Guest: 2024-08-28 17:25:02.5170366 +0000 UTC Remote: 2024-08-28 17:25:02.44813333 +0000 UTC m=+91.867665805 (delta=68.90327ms)
	I0828 17:25:02.562881   35789 fix.go:200] guest clock delta is within tolerance: 68.90327ms
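The fixHost step above reads the guest clock over SSH with "date +%s.%N", compares it to the local wall clock, and only proceeds when the skew is within tolerance (68.90327ms here). A minimal Go sketch of that comparison follows; the helper name, the fractional-second parsing, and the one-second tolerance are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// checkClockDelta parses a remote "date +%s.%N" reading, compares it to the
// local clock, and reports the absolute skew and whether it is within the
// given tolerance. Parsing details here are assumptions for illustration.
func checkClockDelta(remoteOut string, tolerance time.Duration) (time.Duration, bool, error) {
	parts := strings.SplitN(strings.TrimSpace(remoteOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, false, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/trim fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, false, err
		}
	}
	delta := time.Since(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Stand-in for the guest's "date +%s.%N" output.
	now := time.Now()
	remote := fmt.Sprintf("%d.%09d", now.Unix(), now.Nanosecond())
	delta, ok, err := checkClockDelta(remote, time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}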
	I0828 17:25:02.562886   35789 start.go:83] releasing machines lock for "ha-240486", held for 1m31.861305007s
	I0828 17:25:02.562904   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.563160   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:25:02.565485   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.565824   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.565853   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.565971   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566457   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566621   35789 main.go:141] libmachine: (ha-240486) Calling .DriverName
	I0828 17:25:02.566729   35789 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:25:02.566768   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.566853   35789 ssh_runner.go:195] Run: cat /version.json
	I0828 17:25:02.566870   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHHostname
	I0828 17:25:02.569336   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569624   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569666   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.569683   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.569805   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.569994   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.570191   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.570283   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:02.570305   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:02.570433   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHPort
	I0828 17:25:02.570526   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.570598   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHKeyPath
	I0828 17:25:02.570726   35789 main.go:141] libmachine: (ha-240486) Calling .GetSSHUsername
	I0828 17:25:02.570848   35789 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/ha-240486/id_rsa Username:docker}
	I0828 17:25:02.647267   35789 ssh_runner.go:195] Run: systemctl --version
	I0828 17:25:02.690791   35789 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:25:02.850816   35789 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:25:02.859352   35789 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:25:02.859422   35789 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:25:02.868271   35789 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 17:25:02.868297   35789 start.go:495] detecting cgroup driver to use...
	I0828 17:25:02.868360   35789 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:25:02.883643   35789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:25:02.897550   35789 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:25:02.897612   35789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:25:02.911123   35789 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:25:02.925312   35789 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:25:03.077036   35789 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:25:03.231825   35789 docker.go:233] disabling docker service ...
	I0828 17:25:03.231898   35789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:25:03.251836   35789 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:25:03.266267   35789 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:25:03.410278   35789 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:25:03.553159   35789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:25:03.566904   35789 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:25:03.584608   35789 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:25:03.584660   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.594989   35789 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:25:03.595048   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.605222   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.615401   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.625770   35789 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:25:03.636199   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.646418   35789 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.656748   35789 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:25:03.667156   35789 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:25:03.676961   35789 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:25:03.718259   35789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:25:03.990554   35789 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:25:04.288293   35789 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:25:04.288360   35789 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:25:04.292986   35789 start.go:563] Will wait 60s for crictl version
	I0828 17:25:04.293045   35789 ssh_runner.go:195] Run: which crictl
	I0828 17:25:04.296567   35789 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:25:04.336758   35789 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:25:04.336829   35789 ssh_runner.go:195] Run: crio --version
	I0828 17:25:04.365260   35789 ssh_runner.go:195] Run: crio --version
	I0828 17:25:04.397712   35789 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:25:04.399003   35789 main.go:141] libmachine: (ha-240486) Calling .GetIP
	I0828 17:25:04.401568   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:04.401832   35789 main.go:141] libmachine: (ha-240486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3e:e0:a1", ip: ""} in network mk-ha-240486: {Iface:virbr1 ExpiryTime:2024-08-28 18:14:02 +0000 UTC Type:0 Mac:52:54:00:3e:e0:a1 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-240486 Clientid:01:52:54:00:3e:e0:a1}
	I0828 17:25:04.401857   35789 main.go:141] libmachine: (ha-240486) DBG | domain ha-240486 has defined IP address 192.168.39.227 and MAC address 52:54:00:3e:e0:a1 in network mk-ha-240486
	I0828 17:25:04.402016   35789 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:25:04.406801   35789 kubeadm.go:883] updating cluster {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:25:04.407085   35789 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:25:04.407150   35789 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:25:04.451120   35789 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:25:04.451148   35789 crio.go:433] Images already preloaded, skipping extraction
	I0828 17:25:04.451205   35789 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:25:04.483463   35789 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:25:04.483491   35789 cache_images.go:84] Images are preloaded, skipping loading
	I0828 17:25:04.483503   35789 kubeadm.go:934] updating node { 192.168.39.227 8443 v1.31.0 crio true true} ...
	I0828 17:25:04.483620   35789 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-240486 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.227
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:25:04.483700   35789 ssh_runner.go:195] Run: crio config
	I0828 17:25:04.531875   35789 cni.go:84] Creating CNI manager for ""
	I0828 17:25:04.531896   35789 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0828 17:25:04.531904   35789 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:25:04.531928   35789 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.227 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-240486 NodeName:ha-240486 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.227"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.227 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:25:04.532089   35789 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.227
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-240486"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.227
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.227"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
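The kubeadm InitConfiguration/ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents above are rendered from the option set logged at kubeadm.go:181. A minimal sketch of that style of generation with Go's text/template follows; the struct fields and the template body are illustrative assumptions (values copied from the log), not minikube's actual bootstrapper template.

package main

import (
	"os"
	"text/template"
)

// opts mirrors a few of the values visible in the log above; the struct and
// the template are illustrative only.
type opts struct {
	ClusterName       string
	KubernetesVersion string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	o := opts{
		ClusterName:       "mk",
		KubernetesVersion: "v1.31.0",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(clusterCfg))
	if err := t.Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}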
	
	I0828 17:25:04.532119   35789 kube-vip.go:115] generating kube-vip config ...
	I0828 17:25:04.532173   35789 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0828 17:25:04.543263   35789 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0828 17:25:04.543400   35789 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0828 17:25:04.543465   35789 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:25:04.552303   35789 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:25:04.552389   35789 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0828 17:25:04.561212   35789 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0828 17:25:04.580297   35789 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:25:04.595719   35789 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0828 17:25:04.610817   35789 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0828 17:25:04.627827   35789 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0828 17:25:04.631559   35789 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:25:04.773081   35789 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:25:04.786360   35789 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486 for IP: 192.168.39.227
	I0828 17:25:04.786382   35789 certs.go:194] generating shared ca certs ...
	I0828 17:25:04.786397   35789 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:04.786527   35789 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:25:04.786571   35789 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:25:04.786579   35789 certs.go:256] generating profile certs ...
	I0828 17:25:04.786655   35789 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/client.key
	I0828 17:25:04.786680   35789 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec
	I0828 17:25:04.786693   35789 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.227 192.168.39.103 192.168.39.28 192.168.39.254]
	I0828 17:25:05.048941   35789 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec ...
	I0828 17:25:05.048972   35789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec: {Name:mk861fecc78047e15c79214d24f5e8155355b432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:05.049133   35789 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec ...
	I0828 17:25:05.049143   35789 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec: {Name:mk8bd5a26c1a54101a89c3b0564624de3c5322d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:25:05.049211   35789 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt.731f7cec -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt
	I0828 17:25:05.049376   35789 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key.731f7cec -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key
	I0828 17:25:05.049520   35789 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key
	I0828 17:25:05.049535   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:25:05.049549   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:25:05.049563   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:25:05.049605   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:25:05.049625   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:25:05.049636   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:25:05.049652   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:25:05.049674   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:25:05.049719   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:25:05.049745   35789 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:25:05.049753   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:25:05.049773   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:25:05.049795   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:25:05.049814   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:25:05.049848   35789 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:25:05.049875   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.049889   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.049901   35789 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.050507   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:25:05.074438   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:25:05.096374   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:25:05.118267   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:25:05.141006   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 17:25:05.162140   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:25:05.183370   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:25:05.205628   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/ha-240486/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 17:25:05.227312   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:25:05.250216   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:25:05.271912   35789 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:25:05.294254   35789 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:25:05.309339   35789 ssh_runner.go:195] Run: openssl version
	I0828 17:25:05.314817   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:25:05.324994   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.329186   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.329252   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:25:05.334714   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:25:05.343753   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:25:05.353967   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.358695   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.358751   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:25:05.364355   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:25:05.373901   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:25:05.384328   35789 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.388682   35789 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.388731   35789 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:25:05.394136   35789 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 17:25:05.403336   35789 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:25:05.407754   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 17:25:05.413381   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 17:25:05.418864   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 17:25:05.424204   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 17:25:05.429757   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 17:25:05.434929   35789 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
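Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now, with the exit status carrying the answer. The same check expressed with Go's standard library, as a standalone sketch; the helper name is made up, and the paths are the ones checked in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path stops being valid
// within the given window (the Go equivalent of openssl's -checkend check).
// The helper name is illustrative.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Paths taken from the checks in the log above.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Printf("%s: %v\n", p, err)
			continue
		}
		fmt.Printf("%s: expires within 24h: %v\n", p, soon)
	}
}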
	I0828 17:25:05.440115   35789 kubeadm.go:392] StartCluster: {Name:ha-240486 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-240486 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.227 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.103 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.28 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.125 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:25:05.440240   35789 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:25:05.440294   35789 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:25:05.475047   35789 cri.go:89] found id: "dce20397a3c454526e6cd3309071f31f943894f7e9043c84c8dd24be076b4e86"
	I0828 17:25:05.475074   35789 cri.go:89] found id: "03b8618147a9f8fe0ed74b3064a117f5a3fddbf3c0439c61314f657416e2c4ca"
	I0828 17:25:05.475080   35789 cri.go:89] found id: "0f5b811659f6edeb6d1f6de19fecaecc7791089d8c22cbfd3d3bfc30be215626"
	I0828 17:25:05.475084   35789 cri.go:89] found id: "fd86c846060e5e6db0a04c43e159d479fba1953aa54543ccfc94c815b790873e"
	I0828 17:25:05.475091   35789 cri.go:89] found id: "687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342"
	I0828 17:25:05.475096   35789 cri.go:89] found id: "5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc"
	I0828 17:25:05.475100   35789 cri.go:89] found id: "a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79"
	I0828 17:25:05.475104   35789 cri.go:89] found id: "5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd"
	I0828 17:25:05.475108   35789 cri.go:89] found id: "e264b3c2fcf6e9bcb36188bf8220e3a34460fe9740c6d4df332c937aa3d73846"
	I0828 17:25:05.475115   35789 cri.go:89] found id: "1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096"
	I0828 17:25:05.475133   35789 cri.go:89] found id: "6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594"
	I0828 17:25:05.475139   35789 cri.go:89] found id: "594ab811e29b5ad78a8c0a590f754540129775e0b65bf3fb8ac3d05808cf6dbe"
	I0828 17:25:05.475143   35789 cri.go:89] found id: "6c141f787017a9c1c78a3b63460b101bc27bc575301ce774a245737347724883"
	I0828 17:25:05.475147   35789 cri.go:89] found id: ""
	I0828 17:25:05.475202   35789 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.673870660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866251673846132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23c43f6d-b7ae-4150-910d-992fb07030b6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.674538757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=341e0d42-6f17-4961-b370-5310278801c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.674594035Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=341e0d42-6f17-4961-b370-5310278801c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.675236310Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=341e0d42-6f17-4961-b370-5310278801c8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.716951218Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2fdb620f-bc42-4409-b675-ad8e757287f2 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.717086462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2fdb620f-bc42-4409-b675-ad8e757287f2 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.718070030Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80fdbec9-2716-4547-bccc-2e14bbd810de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.718528035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866251718498440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80fdbec9-2716-4547-bccc-2e14bbd810de name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.719061759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ecedbcca-f9b3-405f-9603-1c344cc893ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.719115511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ecedbcca-f9b3-405f-9603-1c344cc893ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.719534661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ecedbcca-f9b3-405f-9603-1c344cc893ff name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.770892543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef6459d3-1342-45c4-916b-b946a16864d4 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.771403593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef6459d3-1342-45c4-916b-b946a16864d4 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.773630493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc969fd5-24dd-49f7-b825-c4ed8c9fd819 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.774445976Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866251774409950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc969fd5-24dd-49f7-b825-c4ed8c9fd819 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.775234407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9213dd6e-b9d4-486e-8084-ee2cc8a05e89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.775327764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9213dd6e-b9d4-486e-8084-ee2cc8a05e89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.776093874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9213dd6e-b9d4-486e-8084-ee2cc8a05e89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.818799299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9a923aac-161f-4d02-b231-9e10ec0e0675 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.818884132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9a923aac-161f-4d02-b231-9e10ec0e0675 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.819750180Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c889720c-7dfa-4a50-8b2e-4a20f1d914be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.820376063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866251820352244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c889720c-7dfa-4a50-8b2e-4a20f1d914be name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.820969610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0e3232b0-9dd7-4463-a807-9330e6370851 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.821028106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0e3232b0-9dd7-4463-a807-9330e6370851 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:30:51 ha-240486 crio[3783]: time="2024-08-28 17:30:51.821781831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:125af17f499512876ffc699c59e1c1e0532c93afcb1ea2c27b2e1517888f09fc,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724865977380552519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724865948378600787,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724865943373196389,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410b3c4ea3db4339584cd4eb82d730e9e6e6d49e4e376892b22d470aa6e2076,PodSandboxId:bfd5082b0ae05c270a6f8e67f3a00a3d542697ab92c603e7803aa43927613784,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724865940686214195,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34999fd725a1df8ba5ccd23b0509fe69d01404843249e6fdde331b7f6db0bdf4,PodSandboxId:6582087fafb9ef2c16950f38ff11bb98283e0e074ca6bedc34eb356bcfb23cdd,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1724865920645557636,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71813f46ff394974f25f6692688dea8c,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433,PodSandboxId:bba2e49f096e9bb4135c922b36606c2c1a2c3cd5f14bb526d65ddcfb5e76ebc1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724865907571011627,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termina
tionGracePeriod: 30,},},&Container{Id:3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee,PodSandboxId:33cba4b4c7b39170f08eec8425a5c617ef5a2a9e0df80456d3e19635ac271aeb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724865907557400576,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container
{Id:a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0,PodSandboxId:ca10600dff018bac2cb9158c6bea69620766794d0fc918c255d092c0015526f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907607695574,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.rest
artCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f,PodSandboxId:fe66166e98fcd147844afc97babb88eaf39edf5768a68a46cc2672e4b297e13a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724865907392322859,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50,PodSandboxId:42b0e6e318759642c15cf09e9d516ff75a222362f4b5f5a4256c9048564bb30f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724865907498672676,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 393e4e8ab105af585ab1f9ebd5be80bc,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5,PodSandboxId:1f8473f5c912ec84c3a700d48c08906bf151956e76518f674af1457c2417e13d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724865907373862306,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461,PodSandboxId:ac7692d6be35fbc54be41d23aa08e4a4c0326a3aa807b31bb7cf42ad9597172b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724865907322644056,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-240486,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 5262792087191096d4a2463307ef739d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889,PodSandboxId:df5ad7c301f159540a7d6c0241adcbdb843a6484963339b9dfaf12a812a457ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724865907268385382,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109d
f393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95e4e712d6d3ea37be32d3acca6726f64204d87ebf9a1d92514340845294696,PodSandboxId:0153f5f1c0471faa6468749afbc4f24eb546a3a4230985017b4ecccc93cd475d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724865907096414405,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83a920cf-9505-4ae6-bd10-2582b38ee29b,},Anno
tations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5a3adee066123dc81167d6623b7cb77c85a669a53a9bc3b09df0e92b5a63875,PodSandboxId:23adeed9e41e9b3ab7b1e2d6845c9bb8b4f84c06d4220490fa1f1d7fcce6fccf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724865423382656497,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-tnmmz,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e4608982-afdd-491b-8fdb-ede6a6a4167a,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342,PodSandboxId:375c7b919327cc2d35db09cbf459307441e11d59de7faf5235236b0397752632,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285217770338,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-x562s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78fab040-ae1a-425e-9dc5-e10594b84b9f,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc,PodSandboxId:2efd0861079698c32d3693efd937424637b46e45b4849f08d3813ae57812af04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724865285212512899,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-wtzml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424f87f7-0221-432d-a04f-8f276386be98,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79,PodSandboxId:0d9937bfda982379995864a92ea094362204809176c7de275c9a7a1f3ab14e0e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724865273106340485,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pb8m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67180991-ca3a-4cfb-ba43-919c64d68657,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd,PodSandboxId:762e2586bed2696c68a1fcf033ee4104e1f515b1e7c8b3605380108b41fbcafc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724865269335973269,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jdnzs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c500e4d-bea4-4389-aca7-ebf805f2e642,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096,PodSandboxId:98bba66b20012b59cb22bfdf60bb2a0e92fe5033e4e8c2b35a2c61218a808276,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724865258185225034,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eef3407dfda5c22c64bcead223dfe4f,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594,PodSandboxId:2280901ed00fa194862ffeeca0704423d1feef142f1a92ea5bd19a660c1465b6,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724865258176819966,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-240486,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a055cdc0d382d6b916dd8109df393b3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0e3232b0-9dd7-4463-a807-9330e6370851 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	125af17f49951       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   0153f5f1c0471       storage-provisioner
	8aaf299429f94       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Running             kube-apiserver            3                   42b0e6e318759       kube-apiserver-ha-240486
	60967cc1348fa       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Running             kube-controller-manager   2                   ac7692d6be35f       kube-controller-manager-ha-240486
	0410b3c4ea3db       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   bfd5082b0ae05       busybox-7dff88458-tnmmz
	34999fd725a1d       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   6582087fafb9e       kube-vip-ha-240486
	a672724aec167       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   ca10600dff018       coredns-6f6b679f8f-x562s
	de2aca740592a       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   bba2e49f096e9       kube-proxy-jdnzs
	3321ff37258a7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   33cba4b4c7b39       kindnet-pb8m7
	083c1edf6582c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   42b0e6e318759       kube-apiserver-ha-240486
	abe13582c4837       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   fe66166e98fcd       kube-scheduler-ha-240486
	53addc0306f89       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   1f8473f5c912e       coredns-6f6b679f8f-wtzml
	9b34b34a42087       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   ac7692d6be35f       kube-controller-manager-ha-240486
	092b3fd67ccf5       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   df5ad7c301f15       etcd-ha-240486
	d95e4e712d6d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   0153f5f1c0471       storage-provisioner
	d5a3adee06612       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   23adeed9e41e9       busybox-7dff88458-tnmmz
	687020da7d252       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   375c7b919327c       coredns-6f6b679f8f-x562s
	5171fb49fa83b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   2efd086107969       coredns-6f6b679f8f-wtzml
	a200b18d5b49f       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    16 minutes ago      Exited              kindnet-cni               0                   0d9937bfda982       kindnet-pb8m7
	5da7c6652ad91       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      16 minutes ago      Exited              kube-proxy                0                   762e2586bed26       kube-proxy-jdnzs
	1396de2dd1902       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      16 minutes ago      Exited              kube-scheduler            0                   98bba66b20012       kube-scheduler-ha-240486
	6006f9215c80c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      16 minutes ago      Exited              etcd                      0                   2280901ed00fa       etcd-ha-240486
	
	
	==> coredns [5171fb49fa83b80343e14134b828a1b904be348950aed406f1efc9f3233d62bc] <==
	[INFO] 10.244.1.2:42445 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000451167s
	[INFO] 10.244.3.2:36990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118942s
	[INFO] 10.244.3.2:49081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261149s
	[INFO] 10.244.3.2:35420 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000157575s
	[INFO] 10.244.3.2:45145 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000273687s
	[INFO] 10.244.0.4:59568 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001810378s
	[INFO] 10.244.1.2:40640 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000138766s
	[INFO] 10.244.1.2:36403 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155827s
	[INFO] 10.244.1.2:57247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096044s
	[INFO] 10.244.3.2:58745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021909s
	[INFO] 10.244.3.2:52666 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001012s
	[INFO] 10.244.3.2:55195 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000202518s
	[INFO] 10.244.0.4:50754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164536s
	[INFO] 10.244.0.4:52876 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113989s
	[INFO] 10.244.1.2:43752 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000149181s
	[INFO] 10.244.1.2:39336 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000272379s
	[INFO] 10.244.1.2:54086 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000180306s
	[INFO] 10.244.1.2:35731 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186612s
	[INFO] 10.244.3.2:38396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00014603s
	[INFO] 10.244.3.2:37082 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000155781s
	[INFO] 10.244.0.4:42529 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117311s
	[INFO] 10.244.0.4:54981 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000113539s
	[INFO] 10.244.0.4:46325 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065905s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [53addc0306f8956677be2709efd18c12e46a40768c54f02b43e8df3a5a1370a5] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:60024->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:60024->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:60018->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.7:60018->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [687020da7d2523d1c80386c3a88f4bd03fed4357a23f8dc3c09c14d3ebb60342] <==
	[INFO] 10.244.1.2:45832 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227329s
	[INFO] 10.244.1.2:55717 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.010110062s
	[INFO] 10.244.1.2:36777 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189682s
	[INFO] 10.244.1.2:33751 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105145s
	[INFO] 10.244.1.2:34860 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088194s
	[INFO] 10.244.3.2:43474 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001844418s
	[INFO] 10.244.3.2:42113 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123683s
	[INFO] 10.244.3.2:54119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001316499s
	[INFO] 10.244.3.2:41393 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000061254s
	[INFO] 10.244.0.4:35761 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000174103s
	[INFO] 10.244.0.4:35492 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000135318s
	[INFO] 10.244.0.4:41816 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000037492s
	[INFO] 10.244.0.4:56198 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00165456s
	[INFO] 10.244.0.4:42294 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000034332s
	[INFO] 10.244.0.4:49049 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000062307s
	[INFO] 10.244.0.4:43851 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000033836s
	[INFO] 10.244.1.2:53375 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119804s
	[INFO] 10.244.3.2:50434 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105903s
	[INFO] 10.244.0.4:41203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063169s
	[INFO] 10.244.0.4:51605 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00004099s
	[INFO] 10.244.3.2:53550 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157853s
	[INFO] 10.244.3.2:55570 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000261867s
	[INFO] 10.244.0.4:50195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000278101s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a672724aec167d527a6f9bdb4cebfb4f860cba338ca1ae57a114f2b14b5f6ce0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: Trace[821465324]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (28-Aug-2024 17:25:08.069) (total time: 17220ms):
	Trace[821465324]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host 17220ms (17:25:25.290)
	Trace[821465324]: [17.220925378s] [17.220925378s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:44568->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.8:44568->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-240486
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_14_25_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:30:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:25:52 +0000   Wed, 28 Aug 2024 17:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.227
	  Hostname:    ha-240486
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b73dbe7f63fd4c3baf977a4b53641230
	  System UUID:                b73dbe7f-63fd-4c3b-af97-7a4b53641230
	  Boot ID:                    cb154fe5-0aad-4938-bd54-d2af34922b1d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-tnmmz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-wtzml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-x562s             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-240486                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-pb8m7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-240486             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-240486    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-jdnzs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-240486             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-240486                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m1s                   kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-240486 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-240486 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-240486 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           16m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-240486 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Warning  ContainerGCFailed        6m28s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m49s (x3 over 6m38s)  kubelet          Node ha-240486 status is now: NodeNotReady
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           4m59s                  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-240486 event: Registered Node ha-240486 in Controller
	
	
	Name:               ha-240486-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_15_16_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:15:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:30:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:26:37 +0000   Wed, 28 Aug 2024 17:25:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    ha-240486-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9be8698d6a9a4f2dbc236b4faf8196d2
	  System UUID:                9be8698d-6a9a-4f2d-bc23-6b4faf8196d2
	  Boot ID:                    90d651b1-e0cc-4ce0-b518-997ffe0b527a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5pjcm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-240486-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-q9q9q                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-240486-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-240486-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-4w7tt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-240486-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-240486-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     15m                    cidrAllocator    Node ha-240486-m02 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-240486-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-240486-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-240486-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-240486-m02 event: Registered Node ha-240486-m02 in Controller
	
	
	Name:               ha-240486-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-240486-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=ha-240486
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_17_35_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:17:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-240486-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:28:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:29:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:29:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:29:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 28 Aug 2024 17:28:05 +0000   Wed, 28 Aug 2024 17:29:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    ha-240486-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2dbc2f47ba234abeb085dbeb264b66eb
	  System UUID:                2dbc2f47-ba23-4abe-b085-dbeb264b66eb
	  Boot ID:                    f9dcd4b8-7f25-4cbf-a4a4-bcae5a976c12
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-xww2j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-gngl7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-jlk49           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-240486-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-240486-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m2s                   node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   RegisteredNode           4m59s                  node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   NodeNotReady             4m22s                  node-controller  Node ha-240486-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m14s                  node-controller  Node ha-240486-m04 event: Registered Node ha-240486-m04 in Controller
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-240486-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-240486-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-240486-m04 has been rebooted, boot id: f9dcd4b8-7f25-4cbf-a4a4-bcae5a976c12
	  Normal   NodeReady                2m47s                  kubelet          Node ha-240486-m04 status is now: NodeReady
	  Normal   NodeNotReady             104s                   node-controller  Node ha-240486-m04 status is now: NodeNotReady
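	The Unknown conditions and unreachable taints recorded for ha-240486-m04 above line up with the trailing NodeNotReady event. As a minimal sketch (assuming kubectl is pointed at the ha-240486 profile's context, as elsewhere in this report), the same condition and taint data can be pulled for just that node:
	
	  kubectl --context ha-240486 get node ha-240486-m04 \
	    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'
	  kubectl --context ha-240486 get node ha-240486-m04 \
	    -o jsonpath='{range .spec.taints[*]}{.key}{"="}{.effect}{"\n"}{end}'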
	
	
	==> dmesg <==
	[  +5.828086] systemd-fstab-generator[587]: Ignoring "noauto" option for root device
	[  +0.054618] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049350] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.168729] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.141709] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.274713] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +3.746562] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +3.326521] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.055344] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.078869] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.096594] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.273661] kauditd_printk_skb: 28 callbacks suppressed
	[ +15.597500] kauditd_printk_skb: 31 callbacks suppressed
	[Aug28 17:15] kauditd_printk_skb: 26 callbacks suppressed
	[Aug28 17:21] kauditd_printk_skb: 1 callbacks suppressed
	[Aug28 17:25] systemd-fstab-generator[3546]: Ignoring "noauto" option for root device
	[  +0.149015] systemd-fstab-generator[3558]: Ignoring "noauto" option for root device
	[  +0.181285] systemd-fstab-generator[3572]: Ignoring "noauto" option for root device
	[  +0.151663] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +0.359143] systemd-fstab-generator[3663]: Ignoring "noauto" option for root device
	[  +0.849767] systemd-fstab-generator[3866]: Ignoring "noauto" option for root device
	[  +3.408696] kauditd_printk_skb: 222 callbacks suppressed
	[ +17.949668] kauditd_printk_skb: 1 callbacks suppressed
	[ +22.440144] kauditd_printk_skb: 5 callbacks suppressed
	[  +6.673988] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [092b3fd67ccf50f549e9fdc5831e230ef405283936254cd4f930aed6a8da6889] <==
	{"level":"info","ts":"2024-08-28T17:27:30.268358Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:27:34.050440Z","caller":"traceutil/trace.go:171","msg":"trace[1681486824] transaction","detail":"{read_only:false; response_revision:2469; number_of_response:1; }","duration":"103.92892ms","start":"2024-08-28T17:27:33.946449Z","end":"2024-08-28T17:27:34.050378Z","steps":["trace[1681486824] 'process raft request'  (duration: 92.868093ms)","trace[1681486824] 'compare'  (duration: 10.955746ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T17:27:41.087275Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.018555ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-240486-m03\" ","response":"range_response_count:1 size:3874"}
	{"level":"info","ts":"2024-08-28T17:27:41.087404Z","caller":"traceutil/trace.go:171","msg":"trace[1494469519] range","detail":"{range_begin:/registry/minions/ha-240486-m03; range_end:; response_count:1; response_revision:2506; }","duration":"102.207217ms","start":"2024-08-28T17:27:40.985185Z","end":"2024-08-28T17:27:41.087393Z","steps":["trace[1494469519] 'range keys from in-memory index tree'  (duration: 101.180297ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:28:09.403090Z","caller":"traceutil/trace.go:171","msg":"trace[508714150] transaction","detail":"{read_only:false; response_revision:2603; number_of_response:1; }","duration":"117.778059ms","start":"2024-08-28T17:28:09.285299Z","end":"2024-08-28T17:28:09.403077Z","steps":["trace[508714150] 'process raft request'  (duration: 117.664461ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:28:18.803589Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.28:36310","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-28T17:28:18.814096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bcb2eab2b5d0a9fc switched to configuration voters=(1704953489137870101 13597188278260378108)"}
	{"level":"info","ts":"2024-08-28T17:28:18.816339Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"a9051c714e34311b","local-member-id":"bcb2eab2b5d0a9fc","removed-remote-peer-id":"f1587dbaa7d9fdc3","removed-remote-peer-urls":["https://192.168.39.28:2380"]}
	{"level":"info","ts":"2024-08-28T17:28:18.816466Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.816746Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:28:18.816814Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.817167Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:28:18.817244Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.817306Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"bcb2eab2b5d0a9fc","removed-member-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.817368Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-08-28T17:28:18.817587Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.817821Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3","error":"context canceled"}
	{"level":"warn","ts":"2024-08-28T17:28:18.817898Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f1587dbaa7d9fdc3","error":"failed to read f1587dbaa7d9fdc3 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-28T17:28:18.818003Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.818182Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3","error":"context canceled"}
	{"level":"info","ts":"2024-08-28T17:28:18.818245Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:28:18.818288Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:28:18.818322Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"bcb2eab2b5d0a9fc","removed-remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.826206Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id-stream-handler":"bcb2eab2b5d0a9fc","remote-peer-id-from":"f1587dbaa7d9fdc3"}
	{"level":"warn","ts":"2024-08-28T17:28:18.829387Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id-stream-handler":"bcb2eab2b5d0a9fc","remote-peer-id-from":"f1587dbaa7d9fdc3"}
	
	
	==> etcd [6006f9215c80cf2768b25d97f294ba3b036bb452a87ac3054954c7a949d9c594] <==
	{"level":"warn","ts":"2024-08-28T17:23:31.485846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T17:23:24.217550Z","time spent":"7.266994739s","remote":"127.0.0.1:41610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":83,"response count":0,"response size":0,"request content":"key:\"/registry/validatingadmissionpolicies/\" range_end:\"/registry/validatingadmissionpolicies0\" limit:500 "}
	2024/08/28 17:23:31 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-28T17:23:31.548052Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.227:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:23:31.548136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.227:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-28T17:23:31.548228Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"bcb2eab2b5d0a9fc","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-28T17:23:31.548380Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548420Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548474Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548597Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548654Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548722Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548752Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"17a9362a46a02515"}
	{"level":"info","ts":"2024-08-28T17:23:31.548775Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.548812Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.548883Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549123Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549218Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"bcb2eab2b5d0a9fc","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.549253Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"f1587dbaa7d9fdc3"}
	{"level":"info","ts":"2024-08-28T17:23:31.551239Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"warn","ts":"2024-08-28T17:23:31.551271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.838826413s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-28T17:23:31.551312Z","caller":"traceutil/trace.go:171","msg":"trace[857817843] range","detail":"{range_begin:; range_end:; }","duration":"1.838884074s","start":"2024-08-28T17:23:29.712420Z","end":"2024-08-28T17:23:31.551304Z","steps":["trace[857817843] 'agreement among raft nodes before linearized reading'  (duration: 1.838824814s)"],"step_count":1}
	{"level":"error","ts":"2024-08-28T17:23:31.551344Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n[+]data_corruption ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-28T17:23:31.551453Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.227:2380"}
	{"level":"info","ts":"2024-08-28T17:23:31.551716Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-240486","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.227:2380"],"advertise-client-urls":["https://192.168.39.227:2379"]}
	
	
	==> kernel <==
	 17:30:52 up 17 min,  0 users,  load average: 0.37, 0.61, 0.42
	Linux ha-240486 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3321ff37258a7c3207ea0532c2614cab4523863990fda035e34b65be3cc5beee] <==
	I0828 17:30:08.677545       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:30:18.677507       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:30:18.677573       1 main.go:299] handling current node
	I0828 17:30:18.677600       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:30:18.677606       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:30:18.677780       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:30:18.677796       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:30:28.685093       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:30:28.685170       1 main.go:299] handling current node
	I0828 17:30:28.685193       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:30:28.685201       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:30:28.685369       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:30:28.685401       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:30:38.681576       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:30:38.681630       1 main.go:299] handling current node
	I0828 17:30:38.681651       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:30:38.681662       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:30:38.681830       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:30:38.681851       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:30:48.677645       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:30:48.677712       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:30:48.678034       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:30:48.678059       1 main.go:299] handling current node
	I0828 17:30:48.678079       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:30:48.678084       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [a200b18d5b49f8e666538089074eb742a74b52685198a435f9cf6bb8cb129b79] <==
	I0828 17:22:54.032843       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:04.033255       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:04.033301       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:04.033484       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:04.033508       1 main.go:299] handling current node
	I0828 17:23:04.033523       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:04.033529       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:04.033592       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:04.033609       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:23:14.040003       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:14.040048       1 main.go:299] handling current node
	I0828 17:23:14.040063       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:14.040090       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:14.040260       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:14.040296       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	I0828 17:23:14.040358       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:14.040376       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:24.037655       1 main.go:295] Handling node with IPs: map[192.168.39.125:{}]
	I0828 17:23:24.037699       1 main.go:322] Node ha-240486-m04 has CIDR [10.244.4.0/24] 
	I0828 17:23:24.037854       1 main.go:295] Handling node with IPs: map[192.168.39.227:{}]
	I0828 17:23:24.037873       1 main.go:299] handling current node
	I0828 17:23:24.037885       1 main.go:295] Handling node with IPs: map[192.168.39.103:{}]
	I0828 17:23:24.037890       1 main.go:322] Node ha-240486-m02 has CIDR [10.244.1.0/24] 
	I0828 17:23:24.038004       1 main.go:295] Handling node with IPs: map[192.168.39.28:{}]
	I0828 17:23:24.038022       1 main.go:322] Node ha-240486-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [083c1edf6582c4c38c688224f753b28df8557830f500994b577421a7b9bc5e50] <==
	I0828 17:25:08.097050       1 options.go:228] external host was not specified, using 192.168.39.227
	I0828 17:25:08.103278       1 server.go:142] Version: v1.31.0
	I0828 17:25:08.103326       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:08.487285       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0828 17:25:08.513346       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:25:08.517218       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0828 17:25:08.517431       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0828 17:25:08.517684       1 instance.go:232] Using reconciler: lease
	W0828 17:25:28.484744       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0828 17:25:28.485089       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0828 17:25:28.518880       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0828 17:25:28.518993       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8aaf299429f94cbaecc524ccef007c1684afa3d413c96ee350d5b7b7a7564ae6] <==
	I0828 17:25:50.290552       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0828 17:25:50.364658       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:25:50.364784       1 policy_source.go:224] refreshing policies
	I0828 17:25:50.366856       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 17:25:50.366911       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 17:25:50.368196       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0828 17:25:50.368288       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 17:25:50.368460       1 shared_informer.go:320] Caches are synced for configmaps
	I0828 17:25:50.372255       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 17:25:50.373450       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 17:25:50.373571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:25:50.393403       1 aggregator.go:171] initial CRD sync complete...
	I0828 17:25:50.393515       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 17:25:50.393546       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 17:25:50.393571       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:25:50.408036       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:25:50.413188       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	W0828 17:25:50.433453       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.28]
	I0828 17:25:50.437354       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:25:50.446902       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0828 17:25:50.451199       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0828 17:25:50.455527       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0828 17:25:51.287645       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0828 17:25:51.977809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.227 192.168.39.28]
	W0828 17:26:01.970405       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.103 192.168.39.227]
	
	
	==> kube-controller-manager [60967cc1348fa22f08c1c7531783c9ab4d3fce1260f6f98bafc9bc3a575778c2] <==
	I0828 17:28:17.600609       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.389µs"
	I0828 17:28:17.805816       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="98.969µs"
	I0828 17:28:17.811515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.409µs"
	I0828 17:28:19.873172       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.969937ms"
	I0828 17:28:19.874114       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.788µs"
	I0828 17:28:29.616456       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-240486-m04"
	I0828 17:28:29.616701       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m03"
	E0828 17:28:29.668530       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-240486-m03\", UID:\"29d7fe6d-1025-41fc-a513-55292711cb30\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-240486-m03\", UID:\"836273b8-35d9-40b2-a10a-0b3ff858b51a\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-240486-m03\" not found" logger="UnhandledError"
	E0828 17:28:33.748760       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:33.748862       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:33.748890       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:33.748954       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:33.748982       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:53.749674       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:53.749727       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:53.749739       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:53.749744       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	E0828 17:28:53.749758       1 gc_controller.go:151] "Failed to get node" err="node \"ha-240486-m03\" not found" logger="pod-garbage-collector-controller" node="ha-240486-m03"
	I0828 17:29:08.997985       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:29:09.019320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:29:09.050697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.445691ms"
	I0828 17:29:09.051055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="226.968µs"
	I0828 17:29:09.146333       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:29:10.582561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	I0828 17:29:14.129561       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-240486-m04"
	
	
	==> kube-controller-manager [9b34b34a42087fc70cd5c8a95ec9171ecf77b41a219483cd24e17b7c48484461] <==
	I0828 17:25:08.407376       1 serving.go:386] Generated self-signed cert in-memory
	I0828 17:25:09.124272       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0828 17:25:09.124356       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:09.126194       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0828 17:25:09.126411       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0828 17:25:09.126992       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0828 17:25:09.127693       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0828 17:25:29.525542       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.227:8443/healthz\": dial tcp 192.168.39.227:8443: connect: connection refused"
	
	
	==> kube-proxy [5da7c6652ad91cd36c9463e58d750df92db1ae941d81134d72f6c76004140dfd] <==
	E0828 17:22:19.306147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.377642       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.377854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.378209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.378366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:22.379483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:22.379656       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.522029       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.522207       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.522600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.522654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:28.523121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:28.523252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.810622       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.811283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811335       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:40.810949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:40.811438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:59.242826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:59.243057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1848\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:22:59.243275       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:22:59.243317       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1743\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0828 17:23:05.386768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758": dial tcp 192.168.39.254:8443: connect: no route to host
	E0828 17:23:05.386973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-240486&resourceVersion=1758\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
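	Every failure above is the same symptom: "no route to host" when dialing https://control-plane.minikube.internal:8443, which this node resolves to the kube-vip virtual IP 192.168.39.254 (the address in each error). A minimal sketch for narrowing that down from a shell on the node (both commands are generic tools, not part of the test suite; a 401/403 from the second still proves the VIP and port are reachable):
	
	  ip addr | grep 192.168.39.254                 # is the VIP currently bound on any control-plane interface?
	  curl -k https://192.168.39.254:8443/healthz   # does an apiserver answer behind the VIP at all?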
	
	
	==> kube-proxy [de2aca740592a4a49eb8bb442f001c1d456905053bf247f1edf977f32b25e433] <==
	E0828 17:25:11.338059       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:14.410703       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:17.481971       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:23.625668       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0828 17:25:32.841519       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-240486\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0828 17:25:50.706254       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.227"]
	E0828 17:25:50.706425       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:25:50.742014       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:25:50.742110       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:25:50.742167       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:25:50.744460       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:25:50.744814       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:25:50.744873       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:25:50.746666       1 config.go:197] "Starting service config controller"
	I0828 17:25:50.746743       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:25:50.746797       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:25:50.746827       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:25:50.749056       1 config.go:326] "Starting node config controller"
	I0828 17:25:50.749143       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:25:50.847794       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:25:50.847999       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:25:50.850138       1 shared_informer.go:320] Caches are synced for node config
	W0828 17:29:23.771409       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0828 17:29:23.771868       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0828 17:29:23.772063       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-scheduler [1396de2dd19028a404b48412b56e7988ad428c70fc2ee313ae1f2a0a2b2b7096] <==
	E0828 17:16:59.428003       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-5pjcm\": pod busybox-7dff88458-5pjcm is already assigned to node \"ha-240486-m02\"" pod="default/busybox-7dff88458-5pjcm"
	I0828 17:16:59.428093       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5pjcm" node="ha-240486-m02"
	E0828 17:16:59.424465       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-tnmmz" node="ha-240486-m02"
	E0828 17:16:59.428536       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e4608982-afdd-491b-8fdb-ede6a6a4167a(default/busybox-7dff88458-tnmmz) was assumed on ha-240486-m02 but assigned to ha-240486" pod="default/busybox-7dff88458-tnmmz"
	E0828 17:16:59.428571       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-tnmmz\": pod busybox-7dff88458-tnmmz is already assigned to node \"ha-240486\"" pod="default/busybox-7dff88458-tnmmz"
	I0828 17:16:59.428617       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-tnmmz" node="ha-240486"
	E0828 17:23:19.637840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:20.679695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0828 17:23:20.862024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0828 17:23:21.165042       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:21.527404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0828 17:23:21.876782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0828 17:23:22.917423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0828 17:23:23.486605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	E0828 17:23:24.741091       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0828 17:23:25.986105       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0828 17:23:26.609571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0828 17:23:26.654407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0828 17:23:27.189505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0828 17:23:27.319634       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0828 17:23:28.269205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	I0828 17:23:31.434527       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0828 17:23:31.451351       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:23:31.450902       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0828 17:23:31.469021       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [abe13582c483763cedf27ce6bb1c1ac3af981235b1a300df8e4103c77681267f] <==
	E0828 17:25:45.556135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.227:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:45.886163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:45.886680       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get \"https://192.168.39.227:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.398464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.398505       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.227:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.467826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.227:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.467885       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.227:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.823431       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.823503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.227:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:46.927758       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:46.927848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:48.248211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:48.248306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.227:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:48.431170       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.227:8443: connect: connection refused
	E0828 17:25:48.431209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.227:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.227:8443: connect: connection refused" logger="UnhandledError"
	W0828 17:25:50.297495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 17:25:50.298298       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:25:50.298480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 17:25:50.298529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:25:50.310619       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:25:50.310749       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:26:05.934175       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:28:15.532342       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xww2j\": pod busybox-7dff88458-xww2j is already assigned to node \"ha-240486-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-xww2j" node="ha-240486-m04"
	E0828 17:28:15.532830       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-xww2j\": pod busybox-7dff88458-xww2j is already assigned to node \"ha-240486-m04\"" pod="default/busybox-7dff88458-xww2j"
	I0828 17:28:15.532982       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-xww2j" node="ha-240486-m04"
	
	
	==> kubelet <==
	Aug 28 17:29:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:29:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:29:24 ha-240486 kubelet[1308]: E0828 17:29:24.611569    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866164611023778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:24 ha-240486 kubelet[1308]: E0828 17:29:24.611605    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866164611023778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:34 ha-240486 kubelet[1308]: E0828 17:29:34.613666    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866174613267646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:34 ha-240486 kubelet[1308]: E0828 17:29:34.613703    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866174613267646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:44 ha-240486 kubelet[1308]: E0828 17:29:44.621989    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866184621676440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:44 ha-240486 kubelet[1308]: E0828 17:29:44.622021    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866184621676440,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:54 ha-240486 kubelet[1308]: E0828 17:29:54.624603    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866194624220680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:29:54 ha-240486 kubelet[1308]: E0828 17:29:54.625028    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866194624220680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:04 ha-240486 kubelet[1308]: E0828 17:30:04.626520    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866204626067524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:04 ha-240486 kubelet[1308]: E0828 17:30:04.626553    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866204626067524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:14 ha-240486 kubelet[1308]: E0828 17:30:14.628880    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866214628164691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:14 ha-240486 kubelet[1308]: E0828 17:30:14.628958    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866214628164691,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:24 ha-240486 kubelet[1308]: E0828 17:30:24.386987    1308 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:30:24 ha-240486 kubelet[1308]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:30:24 ha-240486 kubelet[1308]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:30:24 ha-240486 kubelet[1308]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:30:24 ha-240486 kubelet[1308]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:30:24 ha-240486 kubelet[1308]: E0828 17:30:24.636637    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866224635620298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:24 ha-240486 kubelet[1308]: E0828 17:30:24.636675    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866224635620298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:34 ha-240486 kubelet[1308]: E0828 17:30:34.638148    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866234637825970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:34 ha-240486 kubelet[1308]: E0828 17:30:34.638192    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866234637825970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:44 ha-240486 kubelet[1308]: E0828 17:30:44.641627    1308 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866244641184452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:30:44 ha-240486 kubelet[1308]: E0828 17:30:44.641679    1308 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724866244641184452,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 17:30:51.409904   38300 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19529-10317/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
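The kubelet entries near the end of the dump above fail their periodic iptables canary because the IPv6 "nat" table cannot be initialized on the node ("can't initialize ip6tables table `nat'"). As a rough illustration only (this is not minikube or kubelet code; it assumes ip6tables is installed on PATH and that the caller is allowed to query it), the same condition can be probed from Go like this:

package main

import (
	"fmt"
	"os/exec"
)

// probeIP6NAT tries to list the IPv6 nat table, which is roughly the
// precondition the kubelet canary chain creation keeps failing on above.
func probeIP6NAT() error {
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		return fmt.Errorf("ip6tables nat table unavailable: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := probeIP6NAT(); err != nil {
		// On this node the probe would fail until the ip6tables nat support is present.
		fmt.Println(err)
		return
	}
	fmt.Println("ip6tables nat table is available")
}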
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-240486 -n ha-240486
helpers_test.go:261: (dbg) Run:  kubectl --context ha-240486 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.64s)
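The kube-scheduler reflector failures captured in the post-mortem above are all plain "dial tcp 192.168.39.227:8443: connect: connection refused" errors, i.e. the apiserver endpoint was simply not accepting connections during the restart window. A minimal sketch of waiting for that endpoint to come back, using the address from the log and an arbitrary two-minute deadline (illustrative only, not how the test itself waits):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForTCP polls addr until a TCP connection succeeds or the deadline passes.
func waitForTCP(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("%s still refusing connections after %s: %v", addr, timeout, err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Address taken from the scheduler log; the deadline is an assumption.
	if err := waitForTCP("192.168.39.227:8443", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver endpoint is reachable again")
}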

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (324.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-168922
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-168922
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-168922: exit status 82 (2m1.739041358s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-168922-m03"  ...
	* Stopping node "multinode-168922-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-168922" : exit status 82
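The stop exited with status 82 after the GUEST_STOP_TIMEOUT above, and the output itself suggests re-running `minikube logs --file=logs.txt` to collect logs. A minimal sketch of reproducing that sequence outside the test harness, with the binary path and profile name copied from the output above (illustrative only, not the test's own logic):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its exit code.
func run(name string, args ...string) (int, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode(), nil
	}
	return 0, err
}

func main() {
	code, err := run("out/minikube-linux-amd64", "stop", "-p", "multinode-168922")
	if err != nil {
		fmt.Println("could not run minikube stop:", err)
		return
	}
	if code != 0 { // exit status 82 (GUEST_STOP_TIMEOUT) in the failure above
		fmt.Println("stop failed with exit status", code, "- collecting logs as suggested")
		run("out/minikube-linux-amd64", "-p", "multinode-168922", "logs", "--file=logs.txt")
	}
}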
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-168922 --wait=true -v=8 --alsologtostderr
E0828 17:47:26.592514   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:48:00.240483   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:49:23.523630   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-168922 --wait=true -v=8 --alsologtostderr: (3m21.091427663s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-168922
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-168922 -n multinode-168922
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-168922 logs -n 25: (1.37525714s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922:/home/docker/cp-test_multinode-168922-m02_multinode-168922.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922 sudo cat                                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m02_multinode-168922.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03:/home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922-m03 sudo cat                                   | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp testdata/cp-test.txt                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922:/home/docker/cp-test_multinode-168922-m03_multinode-168922.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922 sudo cat                                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02:/home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922-m02 sudo cat                                   | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-168922 node stop m03                                                          | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	| node    | multinode-168922 node start                                                             | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| stop    | -p multinode-168922                                                                     | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| start   | -p multinode-168922                                                                     | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:46 UTC | 28 Aug 24 17:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:46:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:46:58.436940   47471 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:46:58.437053   47471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:46:58.437064   47471 out.go:358] Setting ErrFile to fd 2...
	I0828 17:46:58.437070   47471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:46:58.437265   47471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:46:58.437814   47471 out.go:352] Setting JSON to false
	I0828 17:46:58.438802   47471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5364,"bootTime":1724861854,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:46:58.438858   47471 start.go:139] virtualization: kvm guest
	I0828 17:46:58.441284   47471 out.go:177] * [multinode-168922] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:46:58.442634   47471 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:46:58.442634   47471 notify.go:220] Checking for updates...
	I0828 17:46:58.445043   47471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:46:58.446463   47471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:46:58.447673   47471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:46:58.448936   47471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:46:58.450380   47471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:46:58.452139   47471 config.go:182] Loaded profile config "multinode-168922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:46:58.452245   47471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:46:58.452680   47471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:46:58.452728   47471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:46:58.468092   47471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0828 17:46:58.468575   47471 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:46:58.469125   47471 main.go:141] libmachine: Using API Version  1
	I0828 17:46:58.469145   47471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:46:58.469474   47471 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:46:58.469657   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.506044   47471 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:46:58.507476   47471 start.go:297] selected driver: kvm2
	I0828 17:46:58.507496   47471 start.go:901] validating driver "kvm2" against &{Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:46:58.507662   47471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:46:58.507999   47471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:46:58.508085   47471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:46:58.523919   47471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:46:58.524766   47471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:46:58.524846   47471 cni.go:84] Creating CNI manager for ""
	I0828 17:46:58.524859   47471 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0828 17:46:58.524920   47471 start.go:340] cluster config:
	{Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:46:58.525063   47471 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:46:58.526976   47471 out.go:177] * Starting "multinode-168922" primary control-plane node in "multinode-168922" cluster
	I0828 17:46:58.528229   47471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:46:58.528272   47471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:46:58.528283   47471 cache.go:56] Caching tarball of preloaded images
	I0828 17:46:58.528399   47471 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:46:58.528413   47471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:46:58.528536   47471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/config.json ...
	I0828 17:46:58.528753   47471 start.go:360] acquireMachinesLock for multinode-168922: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:46:58.528799   47471 start.go:364] duration metric: took 25.856µs to acquireMachinesLock for "multinode-168922"
	I0828 17:46:58.528815   47471 start.go:96] Skipping create...Using existing machine configuration
	I0828 17:46:58.528821   47471 fix.go:54] fixHost starting: 
	I0828 17:46:58.529099   47471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:46:58.529129   47471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:46:58.543997   47471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45985
	I0828 17:46:58.544366   47471 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:46:58.544847   47471 main.go:141] libmachine: Using API Version  1
	I0828 17:46:58.544866   47471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:46:58.545188   47471 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:46:58.545468   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.545645   47471 main.go:141] libmachine: (multinode-168922) Calling .GetState
	I0828 17:46:58.547156   47471 fix.go:112] recreateIfNeeded on multinode-168922: state=Running err=<nil>
	W0828 17:46:58.547179   47471 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 17:46:58.549019   47471 out.go:177] * Updating the running kvm2 "multinode-168922" VM ...
	I0828 17:46:58.550163   47471 machine.go:93] provisionDockerMachine start ...
	I0828 17:46:58.550185   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.550409   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.552808   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.553164   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.553190   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.553335   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.553530   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.553677   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.553860   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.554028   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.554353   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.554372   47471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:46:58.658819   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-168922
	
	I0828 17:46:58.658858   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.659107   47471 buildroot.go:166] provisioning hostname "multinode-168922"
	I0828 17:46:58.659131   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.659334   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.661702   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.662122   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.662152   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.662296   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.662472   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.662623   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.662749   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.662943   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.663111   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.663123   47471 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-168922 && echo "multinode-168922" | sudo tee /etc/hostname
	I0828 17:46:58.786299   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-168922
	
	I0828 17:46:58.786327   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.789197   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.789591   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.789613   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.789832   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.790017   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.790161   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.790286   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.790457   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.790679   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.790696   47471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-168922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-168922/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-168922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:46:58.891247   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 17:46:58.891274   47471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:46:58.891293   47471 buildroot.go:174] setting up certificates
	I0828 17:46:58.891304   47471 provision.go:84] configureAuth start
	I0828 17:46:58.891356   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.891664   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:46:58.894492   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.894980   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.895004   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.895082   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.897094   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.897481   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.897520   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.897673   47471 provision.go:143] copyHostCerts
	I0828 17:46:58.897701   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:46:58.897730   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:46:58.897748   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:46:58.897818   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:46:58.897903   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:46:58.897925   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:46:58.897932   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:46:58.897955   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:46:58.898008   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:46:58.898024   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:46:58.898030   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:46:58.898050   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:46:58.898141   47471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.multinode-168922 san=[127.0.0.1 192.168.39.123 localhost minikube multinode-168922]
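	For reference, the certificate generated here can be reproduced with plain openssl; a minimal sketch (bash syntax; the CN is an illustrative choice, and minikube builds the certificate in Go rather than shelling out):
	  # Equivalent CA-signed server cert with the SANs listed above (sketch, not minikube's code path).
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.multinode-168922/CN=multinode-168922"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.123,DNS:localhost,DNS:minikube,DNS:multinode-168922\n')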
	I0828 17:46:59.098676   47471 provision.go:177] copyRemoteCerts
	I0828 17:46:59.098728   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:46:59.098750   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:59.101032   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.101386   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:59.101415   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.101560   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:59.101740   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.101864   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:59.102029   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:46:59.181462   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:46:59.181550   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0828 17:46:59.205267   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:46:59.205335   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:46:59.229092   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:46:59.229182   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:46:59.253150   47471 provision.go:87] duration metric: took 361.83317ms to configureAuth
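	Once configureAuth has finished, the SANs on the generated server certificate can be confirmed against the local copy referenced above (requires OpenSSL 1.1.1+ for -ext):
	  # Show subject and SANs of the server cert minikube just generated and copied.
	  openssl x509 -in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem \
	    -noout -subject -ext subjectAltName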
	I0828 17:46:59.253181   47471 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:46:59.253432   47471 config.go:182] Loaded profile config "multinode-168922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:46:59.253523   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:59.256474   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.256858   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:59.256892   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.257099   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:59.257304   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.257504   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.257673   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:59.257876   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:59.258034   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:59.258049   47471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:48:30.119583   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:48:30.119616   47471 machine.go:96] duration metric: took 1m31.569436547s to provisionDockerMachine
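	The timestamps above show that almost all of the 1m31s spent in provisionDockerMachine is the `sudo systemctl restart crio` issued at 17:46:59 and acknowledged at 17:48:30; a sketch (assuming journal access on the guest) for checking when CRI-O actually came back up:
	  # When did the crio unit last enter "active", and what did it log while starting?
	  systemctl show crio -p ActiveEnterTimestamp -p ExecMainStartTimestamp
	  journalctl -u crio -b --no-pager | tail -n 20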
	I0828 17:48:30.119642   47471 start.go:293] postStartSetup for "multinode-168922" (driver="kvm2")
	I0828 17:48:30.119656   47471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:48:30.119679   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.120064   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:48:30.120103   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.123818   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.124216   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.124269   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.124438   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.124608   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.124762   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.124868   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.205881   47471 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:48:30.210062   47471 command_runner.go:130] > NAME=Buildroot
	I0828 17:48:30.210094   47471 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0828 17:48:30.210102   47471 command_runner.go:130] > ID=buildroot
	I0828 17:48:30.210109   47471 command_runner.go:130] > VERSION_ID=2023.02.9
	I0828 17:48:30.210115   47471 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0828 17:48:30.210165   47471 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:48:30.210179   47471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:48:30.210235   47471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:48:30.210321   47471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:48:30.210343   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:48:30.210429   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:48:30.219637   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:48:30.244168   47471 start.go:296] duration metric: took 124.510825ms for postStartSetup
	I0828 17:48:30.244219   47471 fix.go:56] duration metric: took 1m31.715396636s for fixHost
	I0828 17:48:30.244246   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.246925   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.247344   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.247371   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.247465   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.247666   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.247809   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.247918   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.248079   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:48:30.248253   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:48:30.248265   47471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:48:30.350880   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724867310.314524877
	
	I0828 17:48:30.350904   47471 fix.go:216] guest clock: 1724867310.314524877
	I0828 17:48:30.350912   47471 fix.go:229] Guest: 2024-08-28 17:48:30.314524877 +0000 UTC Remote: 2024-08-28 17:48:30.244224825 +0000 UTC m=+91.841449922 (delta=70.300052ms)
	I0828 17:48:30.350930   47471 fix.go:200] guest clock delta is within tolerance: 70.300052ms
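	The delta is derived from the `date +%s.%N` sample above versus the host clock; roughly the same check can be run by hand (values taken from this run; `bc` assumed available on the host):
	  # Compare guest and host wall clocks; the run above measured a ~70ms delta.
	  guest=$(ssh -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa \
	    docker@192.168.39.123 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "clock delta: $(echo "$host - $guest" | bc) s"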
	I0828 17:48:30.350934   47471 start.go:83] releasing machines lock for "multinode-168922", held for 1m31.822124715s
	I0828 17:48:30.350968   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.351266   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:48:30.353955   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.354320   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.354351   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.354440   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.354945   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.355112   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.355235   47471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:48:30.355294   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.355321   47471 ssh_runner.go:195] Run: cat /version.json
	I0828 17:48:30.355343   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.357796   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.357980   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358157   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.358183   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358349   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.358410   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.358439   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358546   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.358594   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.358743   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.358775   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.358917   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.358951   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.359018   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.434779   47471 command_runner.go:130] > {"iso_version": "v1.33.1-1724775098-19521", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "0d49494423856821e9b08161b42ba19c667a6f89"}
	I0828 17:48:30.435116   47471 ssh_runner.go:195] Run: systemctl --version
	I0828 17:48:30.470008   47471 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0828 17:48:30.470154   47471 command_runner.go:130] > systemd 252 (252)
	I0828 17:48:30.470244   47471 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0828 17:48:30.470317   47471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:48:30.631308   47471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 17:48:30.637090   47471 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0828 17:48:30.637157   47471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:48:30.637217   47471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:48:30.646934   47471 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 17:48:30.646959   47471 start.go:495] detecting cgroup driver to use...
	I0828 17:48:30.647017   47471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:48:30.665907   47471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:48:30.680979   47471 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:48:30.681035   47471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:48:30.695635   47471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:48:30.710502   47471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:48:30.863556   47471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:48:31.004175   47471 docker.go:233] disabling docker service ...
	I0828 17:48:31.004251   47471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:48:31.019857   47471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:48:31.033094   47471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:48:31.170504   47471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:48:31.306022   47471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
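	With cri-docker and docker stopped and masked, a quick sanity sketch (systemd assumed) to confirm they will not compete with CRI-O for the node:
	  # "masked" / "inactive" is the expected answer for these units at this point.
	  systemctl is-enabled docker.service cri-docker.service 2>/dev/null
	  systemctl is-active docker.service containerd.service 2>/dev/null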
	I0828 17:48:31.319788   47471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:48:31.338173   47471 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0828 17:48:31.338214   47471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:48:31.338275   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.348599   47471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:48:31.348664   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.358530   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.368135   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.378221   47471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:48:31.388175   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.397767   47471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.408337   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
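	Taken together, the sed edits above leave the drop-in with a pinned pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl; a sketch of inspecting the result (key names as assumed from the commands above):
	  # Show the lines the provisioning step just rewrote in CRI-O's drop-in config.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (approximately):
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #     "net.ipv4.ip_unprivileged_port_start=0",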
	I0828 17:48:31.417974   47471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:48:31.426446   47471 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0828 17:48:31.426687   47471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:48:31.435675   47471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:31.578667   47471 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:48:31.770169   47471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:48:31.770240   47471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:48:31.774558   47471 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0828 17:48:31.774582   47471 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0828 17:48:31.774589   47471 command_runner.go:130] > Device: 0,22	Inode: 1334        Links: 1
	I0828 17:48:31.774596   47471 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0828 17:48:31.774602   47471 command_runner.go:130] > Access: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774612   47471 command_runner.go:130] > Modify: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774620   47471 command_runner.go:130] > Change: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774627   47471 command_runner.go:130] >  Birth: -
	I0828 17:48:31.774663   47471 start.go:563] Will wait 60s for crictl version
	I0828 17:48:31.774726   47471 ssh_runner.go:195] Run: which crictl
	I0828 17:48:31.778179   47471 command_runner.go:130] > /usr/bin/crictl
	I0828 17:48:31.778238   47471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:48:31.815871   47471 command_runner.go:130] > Version:  0.1.0
	I0828 17:48:31.815890   47471 command_runner.go:130] > RuntimeName:  cri-o
	I0828 17:48:31.815895   47471 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0828 17:48:31.815900   47471 command_runner.go:130] > RuntimeApiVersion:  v1
	I0828 17:48:31.815984   47471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:48:31.816044   47471 ssh_runner.go:195] Run: crio --version
	I0828 17:48:31.841405   47471 command_runner.go:130] > crio version 1.29.1
	I0828 17:48:31.841431   47471 command_runner.go:130] > Version:        1.29.1
	I0828 17:48:31.841441   47471 command_runner.go:130] > GitCommit:      unknown
	I0828 17:48:31.841448   47471 command_runner.go:130] > GitCommitDate:  unknown
	I0828 17:48:31.841454   47471 command_runner.go:130] > GitTreeState:   clean
	I0828 17:48:31.841463   47471 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0828 17:48:31.841468   47471 command_runner.go:130] > GoVersion:      go1.21.6
	I0828 17:48:31.841473   47471 command_runner.go:130] > Compiler:       gc
	I0828 17:48:31.841477   47471 command_runner.go:130] > Platform:       linux/amd64
	I0828 17:48:31.841482   47471 command_runner.go:130] > Linkmode:       dynamic
	I0828 17:48:31.841487   47471 command_runner.go:130] > BuildTags:      
	I0828 17:48:31.841491   47471 command_runner.go:130] >   containers_image_ostree_stub
	I0828 17:48:31.841496   47471 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0828 17:48:31.841503   47471 command_runner.go:130] >   btrfs_noversion
	I0828 17:48:31.841509   47471 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0828 17:48:31.841513   47471 command_runner.go:130] >   libdm_no_deferred_remove
	I0828 17:48:31.841517   47471 command_runner.go:130] >   seccomp
	I0828 17:48:31.841521   47471 command_runner.go:130] > LDFlags:          unknown
	I0828 17:48:31.841527   47471 command_runner.go:130] > SeccompEnabled:   true
	I0828 17:48:31.841531   47471 command_runner.go:130] > AppArmorEnabled:  false
	I0828 17:48:31.842844   47471 ssh_runner.go:195] Run: crio --version
	I0828 17:48:31.869381   47471 command_runner.go:130] > crio version 1.29.1
	I0828 17:48:31.869406   47471 command_runner.go:130] > Version:        1.29.1
	I0828 17:48:31.869414   47471 command_runner.go:130] > GitCommit:      unknown
	I0828 17:48:31.869420   47471 command_runner.go:130] > GitCommitDate:  unknown
	I0828 17:48:31.869427   47471 command_runner.go:130] > GitTreeState:   clean
	I0828 17:48:31.869439   47471 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0828 17:48:31.869446   47471 command_runner.go:130] > GoVersion:      go1.21.6
	I0828 17:48:31.869452   47471 command_runner.go:130] > Compiler:       gc
	I0828 17:48:31.869460   47471 command_runner.go:130] > Platform:       linux/amd64
	I0828 17:48:31.869467   47471 command_runner.go:130] > Linkmode:       dynamic
	I0828 17:48:31.869479   47471 command_runner.go:130] > BuildTags:      
	I0828 17:48:31.869486   47471 command_runner.go:130] >   containers_image_ostree_stub
	I0828 17:48:31.869494   47471 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0828 17:48:31.869503   47471 command_runner.go:130] >   btrfs_noversion
	I0828 17:48:31.869510   47471 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0828 17:48:31.869515   47471 command_runner.go:130] >   libdm_no_deferred_remove
	I0828 17:48:31.869519   47471 command_runner.go:130] >   seccomp
	I0828 17:48:31.869524   47471 command_runner.go:130] > LDFlags:          unknown
	I0828 17:48:31.869528   47471 command_runner.go:130] > SeccompEnabled:   true
	I0828 17:48:31.869532   47471 command_runner.go:130] > AppArmorEnabled:  false
	I0828 17:48:31.871479   47471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:48:31.872854   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:48:31.875663   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:31.876041   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:31.876070   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:31.876309   47471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:48:31.880099   47471 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0828 17:48:31.880211   47471 kubeadm.go:883] updating cluster {Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:48:31.880353   47471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:48:31.880410   47471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:31.920788   47471 command_runner.go:130] > {
	I0828 17:48:31.920815   47471 command_runner.go:130] >   "images": [
	I0828 17:48:31.920822   47471 command_runner.go:130] >     {
	I0828 17:48:31.920834   47471 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0828 17:48:31.920841   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.920852   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0828 17:48:31.920856   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920860   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.920869   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0828 17:48:31.920876   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0828 17:48:31.920879   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920884   47471 command_runner.go:130] >       "size": "87165492",
	I0828 17:48:31.920888   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.920892   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.920899   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.920906   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.920919   47471 command_runner.go:130] >     },
	I0828 17:48:31.920927   47471 command_runner.go:130] >     {
	I0828 17:48:31.920938   47471 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0828 17:48:31.920945   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.920953   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0828 17:48:31.920958   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920962   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.920969   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0828 17:48:31.920979   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0828 17:48:31.920983   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920988   47471 command_runner.go:130] >       "size": "87190579",
	I0828 17:48:31.920992   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.920998   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921004   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921008   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921011   47471 command_runner.go:130] >     },
	I0828 17:48:31.921015   47471 command_runner.go:130] >     {
	I0828 17:48:31.921021   47471 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0828 17:48:31.921026   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921031   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0828 17:48:31.921035   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921039   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921046   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0828 17:48:31.921056   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0828 17:48:31.921060   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921065   47471 command_runner.go:130] >       "size": "1363676",
	I0828 17:48:31.921069   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921073   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921078   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921081   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921085   47471 command_runner.go:130] >     },
	I0828 17:48:31.921090   47471 command_runner.go:130] >     {
	I0828 17:48:31.921095   47471 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0828 17:48:31.921099   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921105   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0828 17:48:31.921109   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921113   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921128   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0828 17:48:31.921162   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0828 17:48:31.921170   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921175   47471 command_runner.go:130] >       "size": "31470524",
	I0828 17:48:31.921178   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921182   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921185   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921189   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921193   47471 command_runner.go:130] >     },
	I0828 17:48:31.921196   47471 command_runner.go:130] >     {
	I0828 17:48:31.921202   47471 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0828 17:48:31.921208   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921213   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0828 17:48:31.921217   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921221   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921230   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0828 17:48:31.921240   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0828 17:48:31.921244   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921248   47471 command_runner.go:130] >       "size": "61245718",
	I0828 17:48:31.921254   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921258   47471 command_runner.go:130] >       "username": "nonroot",
	I0828 17:48:31.921264   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921268   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921273   47471 command_runner.go:130] >     },
	I0828 17:48:31.921278   47471 command_runner.go:130] >     {
	I0828 17:48:31.921284   47471 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0828 17:48:31.921290   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921295   47471 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0828 17:48:31.921300   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921314   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921321   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0828 17:48:31.921328   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0828 17:48:31.921332   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921336   47471 command_runner.go:130] >       "size": "149009664",
	I0828 17:48:31.921340   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921343   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921349   47471 command_runner.go:130] >       },
	I0828 17:48:31.921353   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921357   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921363   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921369   47471 command_runner.go:130] >     },
	I0828 17:48:31.921373   47471 command_runner.go:130] >     {
	I0828 17:48:31.921383   47471 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0828 17:48:31.921387   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921394   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0828 17:48:31.921398   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921401   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921409   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0828 17:48:31.921418   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0828 17:48:31.921421   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921426   47471 command_runner.go:130] >       "size": "95233506",
	I0828 17:48:31.921430   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921434   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921440   47471 command_runner.go:130] >       },
	I0828 17:48:31.921444   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921448   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921454   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921457   47471 command_runner.go:130] >     },
	I0828 17:48:31.921461   47471 command_runner.go:130] >     {
	I0828 17:48:31.921469   47471 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0828 17:48:31.921473   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921478   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0828 17:48:31.921483   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921487   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921506   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0828 17:48:31.921516   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0828 17:48:31.921519   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921524   47471 command_runner.go:130] >       "size": "89437512",
	I0828 17:48:31.921529   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921533   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921536   47471 command_runner.go:130] >       },
	I0828 17:48:31.921540   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921545   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921551   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921554   47471 command_runner.go:130] >     },
	I0828 17:48:31.921558   47471 command_runner.go:130] >     {
	I0828 17:48:31.921564   47471 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0828 17:48:31.921567   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921572   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0828 17:48:31.921575   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921579   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921586   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0828 17:48:31.921592   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0828 17:48:31.921596   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921599   47471 command_runner.go:130] >       "size": "92728217",
	I0828 17:48:31.921603   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921607   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921611   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921622   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921626   47471 command_runner.go:130] >     },
	I0828 17:48:31.921629   47471 command_runner.go:130] >     {
	I0828 17:48:31.921635   47471 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0828 17:48:31.921641   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921646   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0828 17:48:31.921652   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921656   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921665   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0828 17:48:31.921672   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0828 17:48:31.921678   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921682   47471 command_runner.go:130] >       "size": "68420936",
	I0828 17:48:31.921685   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921689   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921693   47471 command_runner.go:130] >       },
	I0828 17:48:31.921697   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921702   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921705   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921709   47471 command_runner.go:130] >     },
	I0828 17:48:31.921712   47471 command_runner.go:130] >     {
	I0828 17:48:31.921720   47471 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0828 17:48:31.921726   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921730   47471 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0828 17:48:31.921734   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921740   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921747   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0828 17:48:31.921756   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0828 17:48:31.921760   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921764   47471 command_runner.go:130] >       "size": "742080",
	I0828 17:48:31.921767   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921772   47471 command_runner.go:130] >         "value": "65535"
	I0828 17:48:31.921775   47471 command_runner.go:130] >       },
	I0828 17:48:31.921779   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921783   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921787   47471 command_runner.go:130] >       "pinned": true
	I0828 17:48:31.921793   47471 command_runner.go:130] >     }
	I0828 17:48:31.921797   47471 command_runner.go:130] >   ]
	I0828 17:48:31.921801   47471 command_runner.go:130] > }
	I0828 17:48:31.921978   47471 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:48:31.921990   47471 crio.go:433] Images already preloaded, skipping extraction
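	minikube makes this preload decision in Go by parsing the crictl output; roughly the same check can be scripted on the guest (jq assumed available):
	  # Confirm the image set for Kubernetes v1.31.0 is already in CRI-O's store.
	  sudo crictl images --output json \
	    | jq -r '.images[].repoTags[]' \
	    | grep -F 'registry.k8s.io/kube-apiserver:v1.31.0' \
	    && echo preloaded || echo missing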
	I0828 17:48:31.922038   47471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:31.953543   47471 command_runner.go:130] > {
	I0828 17:48:31.953590   47471 command_runner.go:130] >   "images": [
	I0828 17:48:31.953596   47471 command_runner.go:130] >     {
	I0828 17:48:31.953604   47471 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0828 17:48:31.953609   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953615   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0828 17:48:31.953618   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953622   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953629   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0828 17:48:31.953636   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0828 17:48:31.953640   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953645   47471 command_runner.go:130] >       "size": "87165492",
	I0828 17:48:31.953649   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953653   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953660   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953680   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953686   47471 command_runner.go:130] >     },
	I0828 17:48:31.953689   47471 command_runner.go:130] >     {
	I0828 17:48:31.953695   47471 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0828 17:48:31.953707   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953712   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0828 17:48:31.953716   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953720   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953727   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0828 17:48:31.953735   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0828 17:48:31.953738   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953749   47471 command_runner.go:130] >       "size": "87190579",
	I0828 17:48:31.953762   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953768   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953773   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953777   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953780   47471 command_runner.go:130] >     },
	I0828 17:48:31.953785   47471 command_runner.go:130] >     {
	I0828 17:48:31.953790   47471 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0828 17:48:31.953794   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953800   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0828 17:48:31.953804   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953808   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953816   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0828 17:48:31.953823   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0828 17:48:31.953827   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953831   47471 command_runner.go:130] >       "size": "1363676",
	I0828 17:48:31.953834   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953839   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953847   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953850   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953854   47471 command_runner.go:130] >     },
	I0828 17:48:31.953857   47471 command_runner.go:130] >     {
	I0828 17:48:31.953863   47471 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0828 17:48:31.953872   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953876   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0828 17:48:31.953880   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953886   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953893   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0828 17:48:31.953903   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0828 17:48:31.953910   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953914   47471 command_runner.go:130] >       "size": "31470524",
	I0828 17:48:31.953917   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953922   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953925   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953929   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953933   47471 command_runner.go:130] >     },
	I0828 17:48:31.953936   47471 command_runner.go:130] >     {
	I0828 17:48:31.953948   47471 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0828 17:48:31.953955   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953960   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0828 17:48:31.953966   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953969   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953979   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0828 17:48:31.953986   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0828 17:48:31.953991   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953995   47471 command_runner.go:130] >       "size": "61245718",
	I0828 17:48:31.953999   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.954003   47471 command_runner.go:130] >       "username": "nonroot",
	I0828 17:48:31.954007   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954011   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954014   47471 command_runner.go:130] >     },
	I0828 17:48:31.954017   47471 command_runner.go:130] >     {
	I0828 17:48:31.954023   47471 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0828 17:48:31.954029   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954033   47471 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0828 17:48:31.954037   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954041   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954048   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0828 17:48:31.954056   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0828 17:48:31.954059   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954063   47471 command_runner.go:130] >       "size": "149009664",
	I0828 17:48:31.954069   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954088   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954100   47471 command_runner.go:130] >       },
	I0828 17:48:31.954105   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954114   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954118   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954122   47471 command_runner.go:130] >     },
	I0828 17:48:31.954126   47471 command_runner.go:130] >     {
	I0828 17:48:31.954134   47471 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0828 17:48:31.954138   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954146   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0828 17:48:31.954149   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954158   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954168   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0828 17:48:31.954175   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0828 17:48:31.954180   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954184   47471 command_runner.go:130] >       "size": "95233506",
	I0828 17:48:31.954188   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954195   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954198   47471 command_runner.go:130] >       },
	I0828 17:48:31.954202   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954208   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954212   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954218   47471 command_runner.go:130] >     },
	I0828 17:48:31.954221   47471 command_runner.go:130] >     {
	I0828 17:48:31.954226   47471 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0828 17:48:31.954232   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954237   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0828 17:48:31.954240   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954244   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954265   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0828 17:48:31.954275   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0828 17:48:31.954279   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954283   47471 command_runner.go:130] >       "size": "89437512",
	I0828 17:48:31.954286   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954290   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954293   47471 command_runner.go:130] >       },
	I0828 17:48:31.954421   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954424   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954427   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954431   47471 command_runner.go:130] >     },
	I0828 17:48:31.954434   47471 command_runner.go:130] >     {
	I0828 17:48:31.954442   47471 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0828 17:48:31.954448   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954453   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0828 17:48:31.954456   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954462   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954469   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0828 17:48:31.954487   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0828 17:48:31.954493   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954497   47471 command_runner.go:130] >       "size": "92728217",
	I0828 17:48:31.954501   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.954504   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954508   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954514   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954518   47471 command_runner.go:130] >     },
	I0828 17:48:31.954523   47471 command_runner.go:130] >     {
	I0828 17:48:31.954529   47471 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0828 17:48:31.954535   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954539   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0828 17:48:31.954543   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954547   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954554   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0828 17:48:31.954563   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0828 17:48:31.954566   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954570   47471 command_runner.go:130] >       "size": "68420936",
	I0828 17:48:31.954574   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954577   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954589   47471 command_runner.go:130] >       },
	I0828 17:48:31.954593   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954598   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954602   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954608   47471 command_runner.go:130] >     },
	I0828 17:48:31.954611   47471 command_runner.go:130] >     {
	I0828 17:48:31.954617   47471 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0828 17:48:31.954622   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954627   47471 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0828 17:48:31.954630   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954634   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954641   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0828 17:48:31.954650   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0828 17:48:31.954653   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954657   47471 command_runner.go:130] >       "size": "742080",
	I0828 17:48:31.954661   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954671   47471 command_runner.go:130] >         "value": "65535"
	I0828 17:48:31.954677   47471 command_runner.go:130] >       },
	I0828 17:48:31.954681   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954685   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954689   47471 command_runner.go:130] >       "pinned": true
	I0828 17:48:31.954693   47471 command_runner.go:130] >     }
	I0828 17:48:31.954698   47471 command_runner.go:130] >   ]
	I0828 17:48:31.954701   47471 command_runner.go:130] > }
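	(Annotation, not part of the captured log.) The JSON block above is the image inventory CRI-O reports for the node; minikube checks it to decide that the preload already contains every required image. A minimal sketch of how that same inventory could be read on the node, assuming crictl is installed and talking to the default CRI-O socket; the field names (id, repoTags, repoDigests, size, pinned) are taken from the log, while the struct and variable names are illustrative only:

	// sketch: decode the output of `crictl images -o json`
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // uint64 serialized as a string, as seen above
		Pinned      bool     `json:"pinned"`
	}

	type imageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%-55v size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
		}
	}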
	I0828 17:48:31.955005   47471 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:48:31.955019   47471 cache_images.go:84] Images are preloaded, skipping loading
	I0828 17:48:31.955027   47471 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.0 crio true true} ...
	I0828 17:48:31.955121   47471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-168922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
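	(Annotation, not part of the captured log.) The unit text above is the kubelet systemd drop-in minikube generates for this node before running `crio config`: the binary path is derived from the Kubernetes version and the flags carry the node's hostname and IP. A minimal sketch of assembling such an ExecStart line from per-node parameters; the paths and flag set mirror the log, everything else (type and function names) is assumed for illustration:

	// sketch: build a kubelet ExecStart line like the one logged above
	package main

	import (
		"fmt"
		"strings"
	)

	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	func kubeletExecStart(n nodeConfig) string {
		bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", n.KubernetesVersion)
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + n.Hostname,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + n.NodeIP,
		}
		return bin + " " + strings.Join(flags, " ")
	}

	func main() {
		fmt.Println(kubeletExecStart(nodeConfig{
			KubernetesVersion: "v1.31.0",
			Hostname:          "multinode-168922",
			NodeIP:            "192.168.39.123",
		}))
	}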
	I0828 17:48:31.955183   47471 ssh_runner.go:195] Run: crio config
	I0828 17:48:31.987357   47471 command_runner.go:130] ! time="2024-08-28 17:48:31.951058285Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0828 17:48:31.993006   47471 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0828 17:48:31.998146   47471 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0828 17:48:31.998165   47471 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0828 17:48:31.998175   47471 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0828 17:48:31.998180   47471 command_runner.go:130] > #
	I0828 17:48:31.998191   47471 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0828 17:48:31.998203   47471 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0828 17:48:31.998211   47471 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0828 17:48:31.998231   47471 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0828 17:48:31.998241   47471 command_runner.go:130] > # reload'.
	I0828 17:48:31.998251   47471 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0828 17:48:31.998263   47471 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0828 17:48:31.998275   47471 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0828 17:48:31.998284   47471 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0828 17:48:31.998288   47471 command_runner.go:130] > [crio]
	I0828 17:48:31.998297   47471 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0828 17:48:31.998304   47471 command_runner.go:130] > # containers images, in this directory.
	I0828 17:48:31.998309   47471 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0828 17:48:31.998319   47471 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0828 17:48:31.998332   47471 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0828 17:48:31.998346   47471 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0828 17:48:31.998354   47471 command_runner.go:130] > # imagestore = ""
	I0828 17:48:31.998360   47471 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0828 17:48:31.998368   47471 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0828 17:48:31.998372   47471 command_runner.go:130] > storage_driver = "overlay"
	I0828 17:48:31.998380   47471 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0828 17:48:31.998386   47471 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0828 17:48:31.998392   47471 command_runner.go:130] > storage_option = [
	I0828 17:48:31.998397   47471 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0828 17:48:31.998400   47471 command_runner.go:130] > ]
	I0828 17:48:31.998406   47471 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0828 17:48:31.998414   47471 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0828 17:48:31.998418   47471 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0828 17:48:31.998425   47471 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0828 17:48:31.998431   47471 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0828 17:48:31.998437   47471 command_runner.go:130] > # always happen on a node reboot
	I0828 17:48:31.998442   47471 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0828 17:48:31.998455   47471 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0828 17:48:31.998463   47471 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0828 17:48:31.998469   47471 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0828 17:48:31.998477   47471 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0828 17:48:31.998483   47471 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0828 17:48:31.998493   47471 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0828 17:48:31.998497   47471 command_runner.go:130] > # internal_wipe = true
	I0828 17:48:31.998505   47471 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0828 17:48:31.998512   47471 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0828 17:48:31.998516   47471 command_runner.go:130] > # internal_repair = false
	I0828 17:48:31.998524   47471 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0828 17:48:31.998530   47471 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0828 17:48:31.998538   47471 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0828 17:48:31.998543   47471 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0828 17:48:31.998551   47471 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0828 17:48:31.998555   47471 command_runner.go:130] > [crio.api]
	I0828 17:48:31.998560   47471 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0828 17:48:31.998565   47471 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0828 17:48:31.998577   47471 command_runner.go:130] > # IP address on which the stream server will listen.
	I0828 17:48:31.998591   47471 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0828 17:48:31.998597   47471 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0828 17:48:31.998601   47471 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0828 17:48:31.998606   47471 command_runner.go:130] > # stream_port = "0"
	I0828 17:48:31.998611   47471 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0828 17:48:31.998617   47471 command_runner.go:130] > # stream_enable_tls = false
	I0828 17:48:31.998622   47471 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0828 17:48:31.998628   47471 command_runner.go:130] > # stream_idle_timeout = ""
	I0828 17:48:31.998637   47471 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0828 17:48:31.998645   47471 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0828 17:48:31.998648   47471 command_runner.go:130] > # minutes.
	I0828 17:48:31.998653   47471 command_runner.go:130] > # stream_tls_cert = ""
	I0828 17:48:31.998658   47471 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0828 17:48:31.998666   47471 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0828 17:48:31.998670   47471 command_runner.go:130] > # stream_tls_key = ""
	I0828 17:48:31.998675   47471 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0828 17:48:31.998683   47471 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0828 17:48:31.998702   47471 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0828 17:48:31.998708   47471 command_runner.go:130] > # stream_tls_ca = ""
	I0828 17:48:31.998715   47471 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0828 17:48:31.998722   47471 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0828 17:48:31.998729   47471 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0828 17:48:31.998735   47471 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0828 17:48:31.998740   47471 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0828 17:48:31.998748   47471 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0828 17:48:31.998751   47471 command_runner.go:130] > [crio.runtime]
	I0828 17:48:31.998759   47471 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0828 17:48:31.998764   47471 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0828 17:48:31.998770   47471 command_runner.go:130] > # "nofile=1024:2048"
	I0828 17:48:31.998792   47471 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0828 17:48:31.998802   47471 command_runner.go:130] > # default_ulimits = [
	I0828 17:48:31.998805   47471 command_runner.go:130] > # ]
	I0828 17:48:31.998811   47471 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0828 17:48:31.998817   47471 command_runner.go:130] > # no_pivot = false
	I0828 17:48:31.998822   47471 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0828 17:48:31.998836   47471 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0828 17:48:31.998843   47471 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0828 17:48:31.998849   47471 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0828 17:48:31.998856   47471 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0828 17:48:31.998862   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0828 17:48:31.998868   47471 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0828 17:48:31.998873   47471 command_runner.go:130] > # Cgroup setting for conmon
	I0828 17:48:31.998881   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0828 17:48:31.998885   47471 command_runner.go:130] > conmon_cgroup = "pod"
	I0828 17:48:31.998892   47471 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0828 17:48:31.998897   47471 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0828 17:48:31.998907   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0828 17:48:31.998913   47471 command_runner.go:130] > conmon_env = [
	I0828 17:48:31.998918   47471 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0828 17:48:31.998924   47471 command_runner.go:130] > ]
	I0828 17:48:31.998929   47471 command_runner.go:130] > # Additional environment variables to set for all the
	I0828 17:48:31.998936   47471 command_runner.go:130] > # containers. These are overridden if set in the
	I0828 17:48:31.998942   47471 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0828 17:48:31.998946   47471 command_runner.go:130] > # default_env = [
	I0828 17:48:31.998949   47471 command_runner.go:130] > # ]
	I0828 17:48:31.998955   47471 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0828 17:48:31.998963   47471 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0828 17:48:31.998967   47471 command_runner.go:130] > # selinux = false
	I0828 17:48:31.998975   47471 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0828 17:48:31.998981   47471 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0828 17:48:31.998989   47471 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0828 17:48:31.998993   47471 command_runner.go:130] > # seccomp_profile = ""
	I0828 17:48:31.999000   47471 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0828 17:48:31.999006   47471 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0828 17:48:31.999013   47471 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0828 17:48:31.999018   47471 command_runner.go:130] > # which might increase security.
	I0828 17:48:31.999026   47471 command_runner.go:130] > # This option is currently deprecated,
	I0828 17:48:31.999031   47471 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0828 17:48:31.999036   47471 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0828 17:48:31.999047   47471 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0828 17:48:31.999056   47471 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0828 17:48:31.999066   47471 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0828 17:48:31.999074   47471 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0828 17:48:31.999079   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999084   47471 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0828 17:48:31.999090   47471 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0828 17:48:31.999096   47471 command_runner.go:130] > # the cgroup blockio controller.
	I0828 17:48:31.999100   47471 command_runner.go:130] > # blockio_config_file = ""
	I0828 17:48:31.999106   47471 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0828 17:48:31.999112   47471 command_runner.go:130] > # blockio parameters.
	I0828 17:48:31.999116   47471 command_runner.go:130] > # blockio_reload = false
	I0828 17:48:31.999125   47471 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0828 17:48:31.999131   47471 command_runner.go:130] > # irqbalance daemon.
	I0828 17:48:31.999138   47471 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0828 17:48:31.999148   47471 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0828 17:48:31.999155   47471 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0828 17:48:31.999163   47471 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0828 17:48:31.999169   47471 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0828 17:48:31.999177   47471 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0828 17:48:31.999182   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999188   47471 command_runner.go:130] > # rdt_config_file = ""
	I0828 17:48:31.999195   47471 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0828 17:48:31.999201   47471 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0828 17:48:31.999246   47471 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0828 17:48:31.999255   47471 command_runner.go:130] > # separate_pull_cgroup = ""
	I0828 17:48:31.999260   47471 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0828 17:48:31.999266   47471 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0828 17:48:31.999270   47471 command_runner.go:130] > # will be added.
	I0828 17:48:31.999274   47471 command_runner.go:130] > # default_capabilities = [
	I0828 17:48:31.999277   47471 command_runner.go:130] > # 	"CHOWN",
	I0828 17:48:31.999281   47471 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0828 17:48:31.999285   47471 command_runner.go:130] > # 	"FSETID",
	I0828 17:48:31.999288   47471 command_runner.go:130] > # 	"FOWNER",
	I0828 17:48:31.999292   47471 command_runner.go:130] > # 	"SETGID",
	I0828 17:48:31.999298   47471 command_runner.go:130] > # 	"SETUID",
	I0828 17:48:31.999302   47471 command_runner.go:130] > # 	"SETPCAP",
	I0828 17:48:31.999306   47471 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0828 17:48:31.999316   47471 command_runner.go:130] > # 	"KILL",
	I0828 17:48:31.999321   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999329   47471 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0828 17:48:31.999338   47471 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0828 17:48:31.999344   47471 command_runner.go:130] > # add_inheritable_capabilities = false
	I0828 17:48:31.999353   47471 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0828 17:48:31.999360   47471 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0828 17:48:31.999366   47471 command_runner.go:130] > default_sysctls = [
	I0828 17:48:31.999371   47471 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0828 17:48:31.999376   47471 command_runner.go:130] > ]
	I0828 17:48:31.999380   47471 command_runner.go:130] > # List of devices on the host that a
	I0828 17:48:31.999386   47471 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0828 17:48:31.999393   47471 command_runner.go:130] > # allowed_devices = [
	I0828 17:48:31.999397   47471 command_runner.go:130] > # 	"/dev/fuse",
	I0828 17:48:31.999402   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999406   47471 command_runner.go:130] > # List of additional devices. specified as
	I0828 17:48:31.999413   47471 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0828 17:48:31.999420   47471 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0828 17:48:31.999428   47471 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0828 17:48:31.999434   47471 command_runner.go:130] > # additional_devices = [
	I0828 17:48:31.999437   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999446   47471 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0828 17:48:31.999450   47471 command_runner.go:130] > # cdi_spec_dirs = [
	I0828 17:48:31.999453   47471 command_runner.go:130] > # 	"/etc/cdi",
	I0828 17:48:31.999457   47471 command_runner.go:130] > # 	"/var/run/cdi",
	I0828 17:48:31.999460   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999466   47471 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0828 17:48:31.999473   47471 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0828 17:48:31.999479   47471 command_runner.go:130] > # Defaults to false.
	I0828 17:48:31.999485   47471 command_runner.go:130] > # device_ownership_from_security_context = false
	I0828 17:48:31.999491   47471 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0828 17:48:31.999499   47471 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0828 17:48:31.999503   47471 command_runner.go:130] > # hooks_dir = [
	I0828 17:48:31.999509   47471 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0828 17:48:31.999512   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999518   47471 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0828 17:48:31.999530   47471 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0828 17:48:31.999538   47471 command_runner.go:130] > # its default mounts from the following two files:
	I0828 17:48:31.999541   47471 command_runner.go:130] > #
	I0828 17:48:31.999546   47471 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0828 17:48:31.999555   47471 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0828 17:48:31.999560   47471 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0828 17:48:31.999565   47471 command_runner.go:130] > #
	I0828 17:48:31.999570   47471 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0828 17:48:31.999581   47471 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0828 17:48:31.999590   47471 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0828 17:48:31.999595   47471 command_runner.go:130] > #      only add mounts it finds in this file.
	I0828 17:48:31.999600   47471 command_runner.go:130] > #
	I0828 17:48:31.999604   47471 command_runner.go:130] > # default_mounts_file = ""
	I0828 17:48:31.999609   47471 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0828 17:48:31.999617   47471 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0828 17:48:31.999622   47471 command_runner.go:130] > pids_limit = 1024
	I0828 17:48:31.999630   47471 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0828 17:48:31.999638   47471 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0828 17:48:31.999644   47471 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0828 17:48:31.999653   47471 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0828 17:48:31.999657   47471 command_runner.go:130] > # log_size_max = -1
	I0828 17:48:31.999663   47471 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0828 17:48:31.999673   47471 command_runner.go:130] > # log_to_journald = false
	I0828 17:48:31.999679   47471 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0828 17:48:31.999686   47471 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0828 17:48:31.999690   47471 command_runner.go:130] > # Path to directory for container attach sockets.
	I0828 17:48:31.999695   47471 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0828 17:48:31.999701   47471 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0828 17:48:31.999705   47471 command_runner.go:130] > # bind_mount_prefix = ""
	I0828 17:48:31.999710   47471 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0828 17:48:31.999715   47471 command_runner.go:130] > # read_only = false
	I0828 17:48:31.999721   47471 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0828 17:48:31.999728   47471 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0828 17:48:31.999733   47471 command_runner.go:130] > # live configuration reload.
	I0828 17:48:31.999738   47471 command_runner.go:130] > # log_level = "info"
	I0828 17:48:31.999744   47471 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0828 17:48:31.999755   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999762   47471 command_runner.go:130] > # log_filter = ""
	I0828 17:48:31.999767   47471 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0828 17:48:31.999776   47471 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0828 17:48:31.999779   47471 command_runner.go:130] > # separated by comma.
	I0828 17:48:31.999786   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999792   47471 command_runner.go:130] > # uid_mappings = ""
	I0828 17:48:31.999798   47471 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0828 17:48:31.999805   47471 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0828 17:48:31.999810   47471 command_runner.go:130] > # separated by comma.
	I0828 17:48:31.999819   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999823   47471 command_runner.go:130] > # gid_mappings = ""
	I0828 17:48:31.999831   47471 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0828 17:48:31.999836   47471 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0828 17:48:31.999844   47471 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0828 17:48:31.999851   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999857   47471 command_runner.go:130] > # minimum_mappable_uid = -1
	I0828 17:48:31.999863   47471 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0828 17:48:31.999869   47471 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0828 17:48:31.999875   47471 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0828 17:48:31.999883   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999891   47471 command_runner.go:130] > # minimum_mappable_gid = -1
	I0828 17:48:31.999897   47471 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0828 17:48:31.999903   47471 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0828 17:48:31.999908   47471 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0828 17:48:31.999914   47471 command_runner.go:130] > # ctr_stop_timeout = 30
	I0828 17:48:31.999920   47471 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0828 17:48:31.999927   47471 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0828 17:48:31.999932   47471 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0828 17:48:31.999939   47471 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0828 17:48:31.999943   47471 command_runner.go:130] > drop_infra_ctr = false
	I0828 17:48:31.999951   47471 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0828 17:48:31.999956   47471 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0828 17:48:31.999965   47471 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0828 17:48:31.999969   47471 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0828 17:48:31.999976   47471 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0828 17:48:31.999987   47471 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0828 17:48:31.999995   47471 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0828 17:48:32.000000   47471 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0828 17:48:32.000007   47471 command_runner.go:130] > # shared_cpuset = ""
	I0828 17:48:32.000013   47471 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0828 17:48:32.000020   47471 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0828 17:48:32.000024   47471 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0828 17:48:32.000031   47471 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0828 17:48:32.000037   47471 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0828 17:48:32.000042   47471 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0828 17:48:32.000049   47471 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0828 17:48:32.000055   47471 command_runner.go:130] > # enable_criu_support = false
	I0828 17:48:32.000060   47471 command_runner.go:130] > # Enable/disable the generation of the container,
	I0828 17:48:32.000068   47471 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0828 17:48:32.000072   47471 command_runner.go:130] > # enable_pod_events = false
	I0828 17:48:32.000080   47471 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0828 17:48:32.000086   47471 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0828 17:48:32.000093   47471 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0828 17:48:32.000097   47471 command_runner.go:130] > # default_runtime = "runc"
	I0828 17:48:32.000103   47471 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0828 17:48:32.000109   47471 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0828 17:48:32.000120   47471 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0828 17:48:32.000129   47471 command_runner.go:130] > # creation as a file is not desired either.
	I0828 17:48:32.000137   47471 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0828 17:48:32.000144   47471 command_runner.go:130] > # the hostname is being managed dynamically.
	I0828 17:48:32.000149   47471 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0828 17:48:32.000155   47471 command_runner.go:130] > # ]
	I0828 17:48:32.000160   47471 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0828 17:48:32.000168   47471 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0828 17:48:32.000174   47471 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0828 17:48:32.000181   47471 command_runner.go:130] > # Each entry in the table should follow the format:
	I0828 17:48:32.000184   47471 command_runner.go:130] > #
	I0828 17:48:32.000188   47471 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0828 17:48:32.000196   47471 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0828 17:48:32.000237   47471 command_runner.go:130] > # runtime_type = "oci"
	I0828 17:48:32.000244   47471 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0828 17:48:32.000253   47471 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0828 17:48:32.000259   47471 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0828 17:48:32.000264   47471 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0828 17:48:32.000270   47471 command_runner.go:130] > # monitor_env = []
	I0828 17:48:32.000275   47471 command_runner.go:130] > # privileged_without_host_devices = false
	I0828 17:48:32.000279   47471 command_runner.go:130] > # allowed_annotations = []
	I0828 17:48:32.000286   47471 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0828 17:48:32.000290   47471 command_runner.go:130] > # Where:
	I0828 17:48:32.000295   47471 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0828 17:48:32.000301   47471 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0828 17:48:32.000307   47471 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0828 17:48:32.000315   47471 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0828 17:48:32.000318   47471 command_runner.go:130] > #   in $PATH.
	I0828 17:48:32.000327   47471 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0828 17:48:32.000333   47471 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0828 17:48:32.000341   47471 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0828 17:48:32.000345   47471 command_runner.go:130] > #   state.
	I0828 17:48:32.000351   47471 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0828 17:48:32.000358   47471 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0828 17:48:32.000364   47471 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0828 17:48:32.000369   47471 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0828 17:48:32.000377   47471 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0828 17:48:32.000383   47471 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0828 17:48:32.000391   47471 command_runner.go:130] > #   The currently recognized values are:
	I0828 17:48:32.000398   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0828 17:48:32.000407   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0828 17:48:32.000413   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0828 17:48:32.000419   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0828 17:48:32.000426   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0828 17:48:32.000435   47471 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0828 17:48:32.000441   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0828 17:48:32.000449   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0828 17:48:32.000455   47471 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0828 17:48:32.000462   47471 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0828 17:48:32.000467   47471 command_runner.go:130] > #   deprecated option "conmon".
	I0828 17:48:32.000476   47471 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0828 17:48:32.000486   47471 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0828 17:48:32.000495   47471 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0828 17:48:32.000499   47471 command_runner.go:130] > #   should be moved to the container's cgroup
	I0828 17:48:32.000507   47471 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0828 17:48:32.000512   47471 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0828 17:48:32.000520   47471 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0828 17:48:32.000525   47471 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0828 17:48:32.000530   47471 command_runner.go:130] > #
	I0828 17:48:32.000535   47471 command_runner.go:130] > # Using the seccomp notifier feature:
	I0828 17:48:32.000538   47471 command_runner.go:130] > #
	I0828 17:48:32.000543   47471 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0828 17:48:32.000551   47471 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0828 17:48:32.000554   47471 command_runner.go:130] > #
	I0828 17:48:32.000562   47471 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0828 17:48:32.000570   47471 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0828 17:48:32.000573   47471 command_runner.go:130] > #
	I0828 17:48:32.000582   47471 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0828 17:48:32.000588   47471 command_runner.go:130] > # feature.
	I0828 17:48:32.000591   47471 command_runner.go:130] > #
	I0828 17:48:32.000596   47471 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0828 17:48:32.000604   47471 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0828 17:48:32.000610   47471 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0828 17:48:32.000620   47471 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0828 17:48:32.000627   47471 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0828 17:48:32.000632   47471 command_runner.go:130] > #
	I0828 17:48:32.000637   47471 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0828 17:48:32.000645   47471 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0828 17:48:32.000648   47471 command_runner.go:130] > #
	I0828 17:48:32.000655   47471 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0828 17:48:32.000662   47471 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0828 17:48:32.000666   47471 command_runner.go:130] > #
	I0828 17:48:32.000672   47471 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0828 17:48:32.000679   47471 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0828 17:48:32.000683   47471 command_runner.go:130] > # limitation.
	I0828 17:48:32.000688   47471 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0828 17:48:32.000694   47471 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0828 17:48:32.000704   47471 command_runner.go:130] > runtime_type = "oci"
	I0828 17:48:32.000711   47471 command_runner.go:130] > runtime_root = "/run/runc"
	I0828 17:48:32.000715   47471 command_runner.go:130] > runtime_config_path = ""
	I0828 17:48:32.000719   47471 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0828 17:48:32.000725   47471 command_runner.go:130] > monitor_cgroup = "pod"
	I0828 17:48:32.000729   47471 command_runner.go:130] > monitor_exec_cgroup = ""
	I0828 17:48:32.000733   47471 command_runner.go:130] > monitor_env = [
	I0828 17:48:32.000738   47471 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0828 17:48:32.000743   47471 command_runner.go:130] > ]
	I0828 17:48:32.000748   47471 command_runner.go:130] > privileged_without_host_devices = false
	I0828 17:48:32.000756   47471 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0828 17:48:32.000761   47471 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0828 17:48:32.000769   47471 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0828 17:48:32.000776   47471 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0828 17:48:32.000785   47471 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0828 17:48:32.000790   47471 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0828 17:48:32.000800   47471 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0828 17:48:32.000808   47471 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0828 17:48:32.000814   47471 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0828 17:48:32.000821   47471 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0828 17:48:32.000824   47471 command_runner.go:130] > # Example:
	I0828 17:48:32.000828   47471 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0828 17:48:32.000833   47471 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0828 17:48:32.000837   47471 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0828 17:48:32.000843   47471 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0828 17:48:32.000847   47471 command_runner.go:130] > # cpuset = 0
	I0828 17:48:32.000852   47471 command_runner.go:130] > # cpushares = "0-1"
	I0828 17:48:32.000856   47471 command_runner.go:130] > # Where:
	I0828 17:48:32.000860   47471 command_runner.go:130] > # The workload name is workload-type.
	I0828 17:48:32.000866   47471 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0828 17:48:32.000871   47471 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0828 17:48:32.000877   47471 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0828 17:48:32.000884   47471 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0828 17:48:32.000889   47471 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0828 17:48:32.000894   47471 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0828 17:48:32.000899   47471 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0828 17:48:32.000907   47471 command_runner.go:130] > # Default value is set to true
	I0828 17:48:32.000912   47471 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0828 17:48:32.000917   47471 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0828 17:48:32.000921   47471 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0828 17:48:32.000925   47471 command_runner.go:130] > # Default value is set to 'false'
	I0828 17:48:32.000928   47471 command_runner.go:130] > # disable_hostport_mapping = false
	I0828 17:48:32.000934   47471 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0828 17:48:32.000937   47471 command_runner.go:130] > #
	I0828 17:48:32.000942   47471 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0828 17:48:32.000947   47471 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0828 17:48:32.000953   47471 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0828 17:48:32.000962   47471 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0828 17:48:32.000966   47471 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0828 17:48:32.000970   47471 command_runner.go:130] > [crio.image]
	I0828 17:48:32.000976   47471 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0828 17:48:32.000983   47471 command_runner.go:130] > # default_transport = "docker://"
	I0828 17:48:32.000989   47471 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0828 17:48:32.000997   47471 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0828 17:48:32.001001   47471 command_runner.go:130] > # global_auth_file = ""
	I0828 17:48:32.001006   47471 command_runner.go:130] > # The image used to instantiate infra containers.
	I0828 17:48:32.001012   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:32.001019   47471 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0828 17:48:32.001025   47471 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0828 17:48:32.001033   47471 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0828 17:48:32.001037   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:32.001045   47471 command_runner.go:130] > # pause_image_auth_file = ""
	I0828 17:48:32.001051   47471 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0828 17:48:32.001056   47471 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0828 17:48:32.001064   47471 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0828 17:48:32.001070   47471 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0828 17:48:32.001076   47471 command_runner.go:130] > # pause_command = "/pause"
	I0828 17:48:32.001082   47471 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0828 17:48:32.001089   47471 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0828 17:48:32.001095   47471 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0828 17:48:32.001103   47471 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0828 17:48:32.001109   47471 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0828 17:48:32.001121   47471 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0828 17:48:32.001128   47471 command_runner.go:130] > # pinned_images = [
	I0828 17:48:32.001131   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001137   47471 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0828 17:48:32.001143   47471 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0828 17:48:32.001149   47471 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0828 17:48:32.001158   47471 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0828 17:48:32.001163   47471 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0828 17:48:32.001169   47471 command_runner.go:130] > # signature_policy = ""
	I0828 17:48:32.001174   47471 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0828 17:48:32.001183   47471 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0828 17:48:32.001191   47471 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0828 17:48:32.001198   47471 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0828 17:48:32.001205   47471 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0828 17:48:32.001210   47471 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0828 17:48:32.001218   47471 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0828 17:48:32.001224   47471 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0828 17:48:32.001228   47471 command_runner.go:130] > # changing them here.
	I0828 17:48:32.001232   47471 command_runner.go:130] > # insecure_registries = [
	I0828 17:48:32.001235   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001241   47471 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0828 17:48:32.001248   47471 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0828 17:48:32.001252   47471 command_runner.go:130] > # image_volumes = "mkdir"
	I0828 17:48:32.001259   47471 command_runner.go:130] > # Temporary directory to use for storing big files
	I0828 17:48:32.001263   47471 command_runner.go:130] > # big_files_temporary_dir = ""
	I0828 17:48:32.001272   47471 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0828 17:48:32.001277   47471 command_runner.go:130] > # CNI plugins.
	I0828 17:48:32.001281   47471 command_runner.go:130] > [crio.network]
	I0828 17:48:32.001286   47471 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0828 17:48:32.001294   47471 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0828 17:48:32.001299   47471 command_runner.go:130] > # cni_default_network = ""
	I0828 17:48:32.001306   47471 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0828 17:48:32.001310   47471 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0828 17:48:32.001316   47471 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0828 17:48:32.001322   47471 command_runner.go:130] > # plugin_dirs = [
	I0828 17:48:32.001326   47471 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0828 17:48:32.001337   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001345   47471 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0828 17:48:32.001349   47471 command_runner.go:130] > [crio.metrics]
	I0828 17:48:32.001353   47471 command_runner.go:130] > # Globally enable or disable metrics support.
	I0828 17:48:32.001359   47471 command_runner.go:130] > enable_metrics = true
	I0828 17:48:32.001363   47471 command_runner.go:130] > # Specify enabled metrics collectors.
	I0828 17:48:32.001370   47471 command_runner.go:130] > # Per default all metrics are enabled.
	I0828 17:48:32.001375   47471 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0828 17:48:32.001384   47471 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0828 17:48:32.001389   47471 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0828 17:48:32.001393   47471 command_runner.go:130] > # metrics_collectors = [
	I0828 17:48:32.001397   47471 command_runner.go:130] > # 	"operations",
	I0828 17:48:32.001401   47471 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0828 17:48:32.001405   47471 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0828 17:48:32.001409   47471 command_runner.go:130] > # 	"operations_errors",
	I0828 17:48:32.001413   47471 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0828 17:48:32.001417   47471 command_runner.go:130] > # 	"image_pulls_by_name",
	I0828 17:48:32.001421   47471 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0828 17:48:32.001425   47471 command_runner.go:130] > # 	"image_pulls_failures",
	I0828 17:48:32.001429   47471 command_runner.go:130] > # 	"image_pulls_successes",
	I0828 17:48:32.001433   47471 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0828 17:48:32.001439   47471 command_runner.go:130] > # 	"image_layer_reuse",
	I0828 17:48:32.001445   47471 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0828 17:48:32.001452   47471 command_runner.go:130] > # 	"containers_oom_total",
	I0828 17:48:32.001455   47471 command_runner.go:130] > # 	"containers_oom",
	I0828 17:48:32.001459   47471 command_runner.go:130] > # 	"processes_defunct",
	I0828 17:48:32.001463   47471 command_runner.go:130] > # 	"operations_total",
	I0828 17:48:32.001468   47471 command_runner.go:130] > # 	"operations_latency_seconds",
	I0828 17:48:32.001472   47471 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0828 17:48:32.001478   47471 command_runner.go:130] > # 	"operations_errors_total",
	I0828 17:48:32.001482   47471 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0828 17:48:32.001488   47471 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0828 17:48:32.001493   47471 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0828 17:48:32.001499   47471 command_runner.go:130] > # 	"image_pulls_success_total",
	I0828 17:48:32.001505   47471 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0828 17:48:32.001513   47471 command_runner.go:130] > # 	"containers_oom_count_total",
	I0828 17:48:32.001522   47471 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0828 17:48:32.001528   47471 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0828 17:48:32.001531   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001539   47471 command_runner.go:130] > # The port on which the metrics server will listen.
	I0828 17:48:32.001543   47471 command_runner.go:130] > # metrics_port = 9090
	I0828 17:48:32.001550   47471 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0828 17:48:32.001554   47471 command_runner.go:130] > # metrics_socket = ""
	I0828 17:48:32.001559   47471 command_runner.go:130] > # The certificate for the secure metrics server.
	I0828 17:48:32.001565   47471 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0828 17:48:32.001573   47471 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0828 17:48:32.001582   47471 command_runner.go:130] > # certificate on any modification event.
	I0828 17:48:32.001588   47471 command_runner.go:130] > # metrics_cert = ""
	I0828 17:48:32.001593   47471 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0828 17:48:32.001599   47471 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0828 17:48:32.001603   47471 command_runner.go:130] > # metrics_key = ""
	I0828 17:48:32.001610   47471 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0828 17:48:32.001614   47471 command_runner.go:130] > [crio.tracing]
	I0828 17:48:32.001621   47471 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0828 17:48:32.001624   47471 command_runner.go:130] > # enable_tracing = false
	I0828 17:48:32.001629   47471 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0828 17:48:32.001636   47471 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0828 17:48:32.001642   47471 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0828 17:48:32.001648   47471 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0828 17:48:32.001652   47471 command_runner.go:130] > # CRI-O NRI configuration.
	I0828 17:48:32.001658   47471 command_runner.go:130] > [crio.nri]
	I0828 17:48:32.001662   47471 command_runner.go:130] > # Globally enable or disable NRI.
	I0828 17:48:32.001666   47471 command_runner.go:130] > # enable_nri = false
	I0828 17:48:32.001670   47471 command_runner.go:130] > # NRI socket to listen on.
	I0828 17:48:32.001674   47471 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0828 17:48:32.001678   47471 command_runner.go:130] > # NRI plugin directory to use.
	I0828 17:48:32.001683   47471 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0828 17:48:32.001689   47471 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0828 17:48:32.001694   47471 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0828 17:48:32.001702   47471 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0828 17:48:32.001706   47471 command_runner.go:130] > # nri_disable_connections = false
	I0828 17:48:32.001714   47471 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0828 17:48:32.001725   47471 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0828 17:48:32.001732   47471 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0828 17:48:32.001736   47471 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0828 17:48:32.001742   47471 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0828 17:48:32.001746   47471 command_runner.go:130] > [crio.stats]
	I0828 17:48:32.001753   47471 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0828 17:48:32.001761   47471 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0828 17:48:32.001766   47471 command_runner.go:130] > # stats_collection_period = 0
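The CRI-O configuration dump ends here. As a quick sanity check of the image and metrics settings it reports (pause_image, enable_metrics, insecure_registries), something like the following can be run on the node over `minikube ssh`; the file paths are assumptions, since minikube may split the config between /etc/crio/crio.conf and drop-ins under /etc/crio/crio.conf.d/, and the reload step assumes the packaged crio.service wires ExecReload to a SIGHUP:

    # Show the effective CRI-O settings relevant to this run (assumed paths).
    sudo grep -R -nE 'pause_image|enable_metrics|insecure_registries' \
        /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null

    # Several of these options support live configuration reload, per the
    # comments above; a reload applies them without restarting running pods.
    sudo systemctl reload crio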
	I0828 17:48:32.001896   47471 cni.go:84] Creating CNI manager for ""
	I0828 17:48:32.001907   47471 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0828 17:48:32.001915   47471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:48:32.001934   47471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-168922 NodeName:multinode-168922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:48:32.002061   47471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-168922"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
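The generated kubeadm config above is what gets copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If a run like this fails at the bootstrap stage, the rendered file can be checked by hand; a minimal sketch, assuming that path and a kubeadm new enough to have `config validate` (v1.31 is in use here):

    # Validate the rendered config against the kubeadm API types.
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # Or exercise the init flow without changing node state.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run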
	
	I0828 17:48:32.002142   47471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:48:32.011690   47471 command_runner.go:130] > kubeadm
	I0828 17:48:32.011714   47471 command_runner.go:130] > kubectl
	I0828 17:48:32.011720   47471 command_runner.go:130] > kubelet
	I0828 17:48:32.011794   47471 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:48:32.011864   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 17:48:32.020720   47471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0828 17:48:32.036520   47471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:48:32.052086   47471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0828 17:48:32.067831   47471 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0828 17:48:32.071989   47471 command_runner.go:130] > 192.168.39.123	control-plane.minikube.internal
	I0828 17:48:32.072060   47471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:32.209210   47471 ssh_runner.go:195] Run: sudo systemctl start kubelet
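At this point the kubelet drop-in (10-kubeadm.conf) and the kubelet.service unit have been copied over, the daemon reloaded, and the service started. If the kubelet fails to come up in a run like this, the usual cross-checks are plain systemd commands, nothing minikube-specific:

    # Inspect the unit together with its minikube-provided drop-in.
    systemctl cat kubelet
    systemctl status kubelet --no-pager

    # Tail the kubelet logs for bootstrap errors.
    sudo journalctl -u kubelet --no-pager -n 100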
	I0828 17:48:32.223241   47471 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922 for IP: 192.168.39.123
	I0828 17:48:32.223271   47471 certs.go:194] generating shared ca certs ...
	I0828 17:48:32.223293   47471 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:32.223490   47471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:48:32.223561   47471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:48:32.223579   47471 certs.go:256] generating profile certs ...
	I0828 17:48:32.223687   47471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/client.key
	I0828 17:48:32.223755   47471 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key.b3d25175
	I0828 17:48:32.223791   47471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key
	I0828 17:48:32.223807   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:48:32.223821   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:48:32.223833   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:48:32.223846   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:48:32.223860   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:48:32.223872   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:48:32.223885   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:48:32.223896   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:48:32.223944   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:48:32.223969   47471 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:48:32.223978   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:48:32.224000   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:48:32.224022   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:48:32.224053   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:48:32.224089   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:48:32.224114   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.224127   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.224138   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.224713   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:48:32.248038   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:48:32.271219   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:48:32.293508   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:48:32.316064   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 17:48:32.338817   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:48:32.362869   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:48:32.386812   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 17:48:32.409780   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:48:32.432046   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:48:32.496245   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:48:32.525317   47471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:48:32.549508   47471 ssh_runner.go:195] Run: openssl version
	I0828 17:48:32.555439   47471 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0828 17:48:32.555593   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:48:32.566985   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573703   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573873   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573935   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.579344   47471 command_runner.go:130] > b5213941
	I0828 17:48:32.579611   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:48:32.592987   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:48:32.604309   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608808   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608837   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608884   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.615725   47471 command_runner.go:130] > 51391683
	I0828 17:48:32.615793   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:48:32.625704   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:48:32.636677   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641075   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641113   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641160   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.646670   47471 command_runner.go:130] > 3ec20f2e
	I0828 17:48:32.646741   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
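The values printed by `openssl x509 -hash` above (b5213941, 51391683, 3ec20f2e) are OpenSSL subject-name hashes; the `<hash>.0` symlinks created in /etc/ssl/certs are what let OpenSSL, and anything linked against it, find the minikube CA by directory lookup. The mechanism can be checked directly on the node, for example:

    # The symlink name must equal the subject hash of the cert it points to.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0

    # Verify a cert issued by the minikube CA against the hashed directory;
    # this should succeed if the symlinks above are in place.
    sudo openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt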
	I0828 17:48:32.656472   47471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:48:32.660856   47471 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:48:32.660881   47471 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0828 17:48:32.660890   47471 command_runner.go:130] > Device: 253,1	Inode: 9432598     Links: 1
	I0828 17:48:32.660900   47471 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0828 17:48:32.660910   47471 command_runner.go:130] > Access: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660917   47471 command_runner.go:130] > Modify: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660924   47471 command_runner.go:130] > Change: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660932   47471 command_runner.go:130] >  Birth: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.661003   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 17:48:32.666307   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.666360   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 17:48:32.671887   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.671958   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 17:48:32.677729   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.677873   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 17:48:32.682961   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.683126   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 17:48:32.688268   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.688355   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 17:48:32.693651   47471 command_runner.go:130] > Certificate will not expire
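Each `-checkend 86400` call above exits 0 (printing "Certificate will not expire") only if the certificate is still valid for at least 86400 seconds, i.e. 24 hours, which is what lets minikube skip regenerating still-valid certs. To see the actual expiry dates on the node, a loop like this works (run as root, since the cert directory is not world-readable):

    # Print the notAfter date for each control-plane certificate.
    sudo /bin/bash -c 'for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
        echo "== $c"; openssl x509 -noout -enddate -in "$c";
    done'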
	I0828 17:48:32.693719   47471 kubeadm.go:392] StartCluster: {Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
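The StartCluster config above describes a three-node profile: the control plane at 192.168.39.123 plus workers m02 (192.168.39.88) and m03 (192.168.39.13). From the host, the same topology can be read back with standard commands, e.g.:

    # Nodes known to the minikube profile.
    minikube -p multinode-168922 node list

    # Nodes as seen by the cluster itself.
    kubectl --context multinode-168922 get nodes -o wide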
	I0828 17:48:32.693865   47471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:48:32.693931   47471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:48:32.728337   47471 command_runner.go:130] > 1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf
	I0828 17:48:32.728359   47471 command_runner.go:130] > 667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3
	I0828 17:48:32.728365   47471 command_runner.go:130] > 5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9
	I0828 17:48:32.728371   47471 command_runner.go:130] > 9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de
	I0828 17:48:32.728383   47471 command_runner.go:130] > 1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3
	I0828 17:48:32.728397   47471 command_runner.go:130] > c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044
	I0828 17:48:32.728405   47471 command_runner.go:130] > 55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5
	I0828 17:48:32.728418   47471 command_runner.go:130] > 6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb
	I0828 17:48:32.729738   47471 cri.go:89] found id: "1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf"
	I0828 17:48:32.729758   47471 cri.go:89] found id: "667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3"
	I0828 17:48:32.729767   47471 cri.go:89] found id: "5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9"
	I0828 17:48:32.729771   47471 cri.go:89] found id: "9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de"
	I0828 17:48:32.729774   47471 cri.go:89] found id: "1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3"
	I0828 17:48:32.729779   47471 cri.go:89] found id: "c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044"
	I0828 17:48:32.729783   47471 cri.go:89] found id: "55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5"
	I0828 17:48:32.729786   47471 cri.go:89] found id: "6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb"
	I0828 17:48:32.729791   47471 cri.go:89] found id: ""
	I0828 17:48:32.729835   47471 ssh_runner.go:195] Run: sudo runc list -f json
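The eight container IDs found above are the kube-system containers left over from the previous boot of this node. When a restart test like this misbehaves, any of them can be examined individually with crictl; a sketch, reusing the same label filter the tooling runs and one of the IDs from the listing:

    # List kube-system containers with their states.
    sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system

    # Dump full runtime metadata for a single container.
    CID=1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf
    sudo crictl inspect "$CID"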
	
	
	==> CRI-O <==
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.113058954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867420113035986,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f29347d3-1493-4183-bd3e-bcf0ee04e995 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.113544325Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ec159075-8b5a-48f5-b4fe-109c314d292e name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.113613071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ec159075-8b5a-48f5-b4fe-109c314d292e name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.113949404Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ec159075-8b5a-48f5-b4fe-109c314d292e name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.154332925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=49ab3b81-14d8-40bf-9799-89e78b6692ff name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.154413980Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=49ab3b81-14d8-40bf-9799-89e78b6692ff name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.155604932Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e51daef3-550c-4f04-a994-6900fc610cc0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.156085473Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867420156056025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e51daef3-550c-4f04-a994-6900fc610cc0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.156704756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a5436cc-e17b-4ae7-bfa1-61d05686d500 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.156763543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a5436cc-e17b-4ae7-bfa1-61d05686d500 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.157115445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a5436cc-e17b-4ae7-bfa1-61d05686d500 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.195047599Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8943ad73-42d9-4ee5-a318-b13b9044d445 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.195144939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8943ad73-42d9-4ee5-a318-b13b9044d445 name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.196173994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b5642b1-4e62-4891-97ec-d04fd12d8835 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.196668035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867420196645734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b5642b1-4e62-4891-97ec-d04fd12d8835 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.197213396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2854d63c-0e43-443c-bb63-ec0bbdbdf7df name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.197278112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2854d63c-0e43-443c-bb63-ec0bbdbdf7df name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.197656896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2854d63c-0e43-443c-bb63-ec0bbdbdf7df name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.238043945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8adaaeb-4018-47ea-9501-3da89d7ec02c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.238117391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8adaaeb-4018-47ea-9501-3da89d7ec02c name=/runtime.v1.RuntimeService/Version
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.239133048Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9c3d09b-2335-4806-b928-068c219ad74f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.239719047Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867420239691835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9c3d09b-2335-4806-b928-068c219ad74f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.240240635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9010915-5b73-4342-9e42-535e77022aa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.240296888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9010915-5b73-4342-9e42-535e77022aa9 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:50:20 multinode-168922 crio[2742]: time="2024-08-28 17:50:20.240700223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9010915-5b73-4342-9e42-535e77022aa9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3e67573c6cd5d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   5d84015002208       busybox-7dff88458-w6glt
	0c4a975104154       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   578cdc4ec8196       kindnet-x4zf2
	0dbbd014fafc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   7f82180a771a3       coredns-6f6b679f8f-6r6bx
	b546b15c13c29       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   b7f09ea8289c2       kube-proxy-476qk
	095cbff545696       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   67b097f1d50ae       storage-provisioner
	20e895bc0bcc5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   bb260fa4b64b8       kube-scheduler-multinode-168922
	ddba4190ea906       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   319aae90bad56       kube-controller-manager-multinode-168922
	2ef2cad6516ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   930327a1eae5c       etcd-multinode-168922
	f12d2f20d9369       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   231178480a4f8       kube-apiserver-multinode-168922
	03efcbef61f91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   8b780384856ae       busybox-7dff88458-w6glt
	1d5f30bd1d002       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   dc51e5ba8b17b       coredns-6f6b679f8f-6r6bx
	667b50a5c0e58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   7d12f7c63c7a5       storage-provisioner
	5cb61f5b3dfed       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   ff4f9df0138d7       kindnet-x4zf2
	9e3d7d32be036       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   cff3c7deaa1f1       kube-proxy-476qk
	1e68bf808c05d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   e95b10cb5007f       kube-scheduler-multinode-168922
	c8e59f37886db       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   0ce88a035698f       etcd-multinode-168922
	55546ecd55f3c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   0145b4a0f58ca       kube-controller-manager-multinode-168922
	6ca1265a851f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   3aba877e633d2       kube-apiserver-multinode-168922
	
	
	==> coredns [0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45850 - 49312 "HINFO IN 6346308860311285144.4892433205018066843. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011916744s
	
	
	==> coredns [1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf] <==
	[INFO] 10.244.0.3:38946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763349s
	[INFO] 10.244.0.3:46500 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096885s
	[INFO] 10.244.0.3:56452 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044097s
	[INFO] 10.244.0.3:46737 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001048857s
	[INFO] 10.244.0.3:37993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119866s
	[INFO] 10.244.0.3:45844 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041729s
	[INFO] 10.244.0.3:54320 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066208s
	[INFO] 10.244.1.2:48237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128465s
	[INFO] 10.244.1.2:46842 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077743s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188905s
	[INFO] 10.244.1.2:60806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067102s
	[INFO] 10.244.0.3:42715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161353s
	[INFO] 10.244.0.3:33886 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007964s
	[INFO] 10.244.0.3:47500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005551s
	[INFO] 10.244.0.3:54660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072009s
	[INFO] 10.244.1.2:34764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142289s
	[INFO] 10.244.1.2:49999 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156371s
	[INFO] 10.244.1.2:56180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178092s
	[INFO] 10.244.1.2:41725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111127s
	[INFO] 10.244.0.3:33904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123923s
	[INFO] 10.244.0.3:37785 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048683s
	[INFO] 10.244.0.3:38463 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000042744s
	[INFO] 10.244.0.3:40587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000026774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-168922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-168922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=multinode-168922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_42_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:41:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-168922
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:50:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-168922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 893dedabe52d4c32aacf04c2fe93fe01
	  System UUID:                893dedab-e52d-4c32-aacf-04c2fe93fe01
	  Boot ID:                    c015ee5d-f4eb-4aa8-927b-878dcd67f40e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6glt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 coredns-6f6b679f8f-6r6bx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m16s
	  kube-system                 etcd-multinode-168922                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m21s
	  kube-system                 kindnet-x4zf2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m16s
	  kube-system                 kube-apiserver-multinode-168922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-controller-manager-multinode-168922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-proxy-476qk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-multinode-168922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m15s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m21s                kubelet          Node multinode-168922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m21s                kubelet          Node multinode-168922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m21s                kubelet          Node multinode-168922 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m21s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m17s                node-controller  Node multinode-168922 event: Registered Node multinode-168922 in Controller
	  Normal  NodeReady                8m1s                 kubelet          Node multinode-168922 status is now: NodeReady
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node multinode-168922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node multinode-168922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 106s)  kubelet          Node multinode-168922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-168922 event: Registered Node multinode-168922 in Controller
	
	
	Name:               multinode-168922-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-168922-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=multinode-168922
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_49_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:49:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-168922-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:50:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:49:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:49:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:49:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:49:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-168922-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 310348f5dff14d28a2b7e382628a42ce
	  System UUID:                310348f5-dff1-4d28-a2b7-e382628a42ce
	  Boot ID:                    9f706423-b9e2-466e-a8e4-0097c0758b92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45tvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-h7clw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-z6fk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 57s                    kube-proxy  
	  Normal  Starting                 7m28s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  7m33s (x2 over 7m33s)  kubelet     Node multinode-168922-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m33s (x2 over 7m33s)  kubelet     Node multinode-168922-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m33s (x2 over 7m33s)  kubelet     Node multinode-168922-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m33s                  kubelet     Starting kubelet.
	  Normal  NodeReady                7m13s                  kubelet     Node multinode-168922-m02 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-168922-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-168922-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-168922-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                42s                    kubelet     Node multinode-168922-m02 status is now: NodeReady
	
	
	Name:               multinode-168922-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-168922-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=multinode-168922
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_49_57_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:49:57 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-168922-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:50:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:50:17 +0000   Wed, 28 Aug 2024 17:49:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:50:17 +0000   Wed, 28 Aug 2024 17:49:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:50:17 +0000   Wed, 28 Aug 2024 17:49:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:50:17 +0000   Wed, 28 Aug 2024 17:50:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.13
	  Hostname:    multinode-168922-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c96252c4ad94c578b82908bef82caa2
	  System UUID:                0c96252c-4ad9-4c57-8b82-908bef82caa2
	  Boot ID:                    4f867802-292e-48af-98cc-bd0c46844466
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5ct7d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-mfl7g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 6m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m36s (x2 over 6m37s)  kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s (x2 over 6m37s)  kubelet          Node multinode-168922-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s (x2 over 6m37s)  kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet          Node multinode-168922-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-168922-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m26s                  kubelet          Node multinode-168922-m03 status is now: NodeReady
	  Normal  CIDRAssignmentFailed     23s                    cidrAllocator    Node multinode-168922-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet          Node multinode-168922-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet          Node multinode-168922-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19s                    node-controller  Node multinode-168922-m03 event: Registered Node multinode-168922-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-168922-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.060442] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052529] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.186620] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125003] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.273796] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.757692] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.017842] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058820] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994664] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.087752] kauditd_printk_skb: 69 callbacks suppressed
	[Aug28 17:42] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.088666] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.021200] kauditd_printk_skb: 65 callbacks suppressed
	[Aug28 17:43] kauditd_printk_skb: 14 callbacks suppressed
	[Aug28 17:48] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.145114] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.170124] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +0.133913] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +0.269141] systemd-fstab-generator[2733]: Ignoring "noauto" option for root device
	[  +0.631840] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +2.098237] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +5.665904] kauditd_printk_skb: 184 callbacks suppressed
	[  +7.431810] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.124744] systemd-fstab-generator[3797]: Ignoring "noauto" option for root device
	[Aug28 17:49] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31] <==
	{"level":"info","ts":"2024-08-28T17:48:35.490281Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:35.490321Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:35.492995Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:35.514575Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T17:48:35.514862Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:48:35.514904Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:48:35.515020Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:48:35.515042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:48:37.231634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.236861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:37.237127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:37.236862Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:multinode-168922 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:48:37.237548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:37.237586Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:37.238065Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:37.238240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:37.239051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-08-28T17:48:37.239286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T17:50:02.093468Z","caller":"traceutil/trace.go:171","msg":"trace[2062693631] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"126.420516ms","start":"2024-08-28T17:50:01.966968Z","end":"2024-08-28T17:50:02.093388Z","steps":["trace[2062693631] 'process raft request'  (duration: 104.017473ms)","trace[2062693631] 'compare'  (duration: 22.043822ms)"],"step_count":2}
	
	
	==> etcd [c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044] <==
	{"level":"info","ts":"2024-08-28T17:41:55.322013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:41:55.322660Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:41:55.323315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-08-28T17:41:55.336479Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:41:55.336561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:42:47.842684Z","caller":"traceutil/trace.go:171","msg":"trace[1823284932] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"228.824725ms","start":"2024-08-28T17:42:47.613839Z","end":"2024-08-28T17:42:47.842663Z","steps":["trace[1823284932] 'process raft request'  (duration: 215.48225ms)","trace[1823284932] 'compare'  (duration: 13.068972ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:42:47.842668Z","caller":"traceutil/trace.go:171","msg":"trace[1346513069] linearizableReadLoop","detail":"{readStateIndex:456; appliedIndex:455; }","duration":"226.914229ms","start":"2024-08-28T17:42:47.615711Z","end":"2024-08-28T17:42:47.842625Z","steps":["trace[1346513069] 'read index received'  (duration: 213.566092ms)","trace[1346513069] 'applied index is now lower than readState.Index'  (duration: 13.346933ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T17:42:47.842821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.087554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-168922-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.843039Z","caller":"traceutil/trace.go:171","msg":"trace[788604680] range","detail":"{range_begin:/registry/csinodes/multinode-168922-m02; range_end:; response_count:0; response_revision:439; }","duration":"227.341163ms","start":"2024-08-28T17:42:47.615685Z","end":"2024-08-28T17:42:47.843026Z","steps":["trace[788604680] 'agreement among raft nodes before linearized reading'  (duration: 227.032084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:42:47.843164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.986123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-168922-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.843197Z","caller":"traceutil/trace.go:171","msg":"trace[612802627] range","detail":"{range_begin:/registry/minions/multinode-168922-m02; range_end:; response_count:0; response_revision:439; }","duration":"217.025739ms","start":"2024-08-28T17:42:47.626166Z","end":"2024-08-28T17:42:47.843192Z","steps":["trace[612802627] 'agreement among raft nodes before linearized reading'  (duration: 216.974528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:42:47.843347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.289657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.845070Z","caller":"traceutil/trace.go:171","msg":"trace[858011392] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:439; }","duration":"179.012278ms","start":"2024-08-28T17:42:47.666045Z","end":"2024-08-28T17:42:47.845058Z","steps":["trace[858011392] 'agreement among raft nodes before linearized reading'  (duration: 177.269946ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:43:44.051107Z","caller":"traceutil/trace.go:171","msg":"trace[1894492021] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"135.577124ms","start":"2024-08-28T17:43:43.915492Z","end":"2024-08-28T17:43:44.051069Z","steps":["trace[1894492021] 'process raft request'  (duration: 131.536552ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:44:42.498043Z","caller":"traceutil/trace.go:171","msg":"trace[1597191940] transaction","detail":"{read_only:false; response_revision:714; number_of_response:1; }","duration":"108.333359ms","start":"2024-08-28T17:44:42.389681Z","end":"2024-08-28T17:44:42.498015Z","steps":["trace[1597191940] 'process raft request'  (duration: 107.998423ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:46:59.394169Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-28T17:46:59.394325Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-168922","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	{"level":"warn","ts":"2024-08-28T17:46:59.400746Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.400893Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.479790Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.479846Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-28T17:46:59.479914Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4c9b6dd9118b591e","current-leader-member-id":"4c9b6dd9118b591e"}
	{"level":"info","ts":"2024-08-28T17:46:59.482629Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:46:59.482737Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:46:59.482746Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-168922","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	
	
	==> kernel <==
	 17:50:20 up 8 min,  0 users,  load average: 0.18, 0.18, 0.10
	Linux multinode-168922 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88] <==
	I0828 17:49:31.101323       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:49:41.099859       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:49:41.099996       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:49:41.100222       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:49:41.100290       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:49:41.100561       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:49:41.100621       1 main.go:299] handling current node
	I0828 17:49:51.099145       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:49:51.099287       1 main.go:299] handling current node
	I0828 17:49:51.099317       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:49:51.099335       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:49:51.099549       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:49:51.099589       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:50:01.102061       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:50:01.102222       1 main.go:299] handling current node
	I0828 17:50:01.102258       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:50:01.102285       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:50:01.102575       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:50:01.102621       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.2.0/24] 
	I0828 17:50:11.102259       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:50:11.102469       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.2.0/24] 
	I0828 17:50:11.102645       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:50:11.102675       1 main.go:299] handling current node
	I0828 17:50:11.102710       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:50:11.102741       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9] <==
	I0828 17:46:09.509378       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:19.513653       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:19.513842       1 main.go:299] handling current node
	I0828 17:46:19.513899       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:19.513918       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:19.514093       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:19.514117       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:29.510483       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:29.510602       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:29.510818       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:29.510858       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:29.510933       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:29.510952       1 main.go:299] handling current node
	I0828 17:46:39.512671       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:39.512715       1 main.go:299] handling current node
	I0828 17:46:39.512740       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:39.512746       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:39.512893       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:39.512913       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:49.517529       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:49.517581       1 main.go:299] handling current node
	I0828 17:46:49.517595       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:49.517600       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:49.517774       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:49.517780       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb] <==
	W0828 17:41:58.342634       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123]
	I0828 17:41:58.343596       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:41:58.352557       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:41:58.703893       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:41:59.330301       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:41:59.345569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0828 17:41:59.358263       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:42:04.057288       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0828 17:42:04.459231       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0828 17:43:14.354376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36792: use of closed network connection
	E0828 17:43:14.520735       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36816: use of closed network connection
	E0828 17:43:14.685161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36844: use of closed network connection
	E0828 17:43:14.848621       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36860: use of closed network connection
	E0828 17:43:15.006390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36882: use of closed network connection
	E0828 17:43:15.172086       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36890: use of closed network connection
	E0828 17:43:15.438132       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36926: use of closed network connection
	E0828 17:43:15.597985       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36930: use of closed network connection
	E0828 17:43:15.762140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:56170: use of closed network connection
	E0828 17:43:15.932517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:56180: use of closed network connection
	I0828 17:46:59.395852       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0828 17:46:59.418262       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437573       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437640       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437680       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437880       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4] <==
	I0828 17:48:38.519753       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0828 17:48:38.545246       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:48:38.545327       1 policy_source.go:224] refreshing policies
	I0828 17:48:38.549509       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 17:48:38.551687       1 shared_informer.go:320] Caches are synced for configmaps
	E0828 17:48:38.552138       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0828 17:48:38.555041       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 17:48:38.556573       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 17:48:38.559603       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 17:48:38.559501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 17:48:38.559515       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:48:38.565772       1 aggregator.go:171] initial CRD sync complete...
	I0828 17:48:38.565810       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 17:48:38.565817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 17:48:38.565823       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:48:38.591801       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0828 17:48:38.611534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:48:39.426026       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 17:48:40.727704       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:48:40.856629       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:48:40.871185       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:48:40.939398       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 17:48:40.945529       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 17:48:41.974663       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:48:42.225664       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5] <==
	I0828 17:44:33.189336       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:34.333786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:34.333861       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-168922-m03\" does not exist"
	I0828 17:44:34.363363       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	I0828 17:44:34.363533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	E0828 17:44:34.363773       1 range_allocator.go:410] "Node already has a CIDR allocated. Releasing the new one" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	I0828 17:44:34.363797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:34.364086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:34.709495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:35.045367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:38.535267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:44.438401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:54.057950       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:54.058107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:54.069787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:58.515376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:33.531248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:33.531531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m03"
	I0828 17:45:33.550864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:33.580689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.654595ms"
	I0828 17:45:33.580942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.52µs"
	I0828 17:45:38.581620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:38.596910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:38.613211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:48.681029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	
	
	==> kube-controller-manager [ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e] <==
	I0828 17:49:38.619123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="151.445µs"
	I0828 17:49:38.634587       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.381µs"
	I0828 17:49:41.960204       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:49:42.934810       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.986935ms"
	I0828 17:49:42.935035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="113.733µs"
	I0828 17:49:49.461839       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:49:56.318361       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:56.335176       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:56.547268       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:56.547487       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:49:57.532517       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-168922-m03\" does not exist"
	I0828 17:49:57.532780       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:49:57.566554       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.2.0/24"]
	I0828 17:49:57.566598       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	E0828 17:49:57.589531       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	E0828 17:49:57.589625       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-168922-m03"
	E0828 17:49:57.590006       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-168922-m03': failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0828 17:49:57.590082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:57.595292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:57.942914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:02.096365       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:07.703000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:17.383581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:17.384203       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:50:17.397626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	
	
	==> kube-proxy [9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:42:05.355860       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:42:05.363996       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0828 17:42:05.364191       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:42:05.398626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:42:05.398705       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:42:05.398733       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:42:05.401093       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:42:05.401590       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:42:05.401615       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:42:05.403077       1 config.go:197] "Starting service config controller"
	I0828 17:42:05.403121       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:42:05.403142       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:42:05.403158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:42:05.403681       1 config.go:326] "Starting node config controller"
	I0828 17:42:05.403703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:42:05.503393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:42:05.503501       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:42:05.503729       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:48:40.402580       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:48:40.425334       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0828 17:48:40.425405       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:48:40.539562       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:48:40.539621       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:48:40.539649       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:48:40.557848       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:48:40.558099       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:48:40.558124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:48:40.565628       1 config.go:197] "Starting service config controller"
	I0828 17:48:40.565667       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:48:40.565689       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:48:40.565693       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:48:40.566109       1 config.go:326] "Starting node config controller"
	I0828 17:48:40.566135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:48:40.665801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:48:40.665883       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:48:40.666206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3] <==
	E0828 17:41:56.742108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:56.742171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 17:41:56.742278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:41:56.742365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:56.742384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0828 17:41:56.743578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 17:41:56.743644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0828 17:41:56.744473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.761610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:57.761666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.772364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 17:41:57.772407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.975304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:41:57.975375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:58.007618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 17:41:58.007694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:58.144941       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:41:58.145086       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:42:01.328481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:46:59.397060       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87] <==
	I0828 17:48:35.948806       1 serving.go:386] Generated self-signed cert in-memory
	W0828 17:48:38.515703       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 17:48:38.515800       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 17:48:38.515829       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 17:48:38.515859       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 17:48:38.541974       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 17:48:38.542079       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:48:38.549112       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 17:48:38.549331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:48:38.549726       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 17:48:38.549822       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 17:48:38.650586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:48:44 multinode-168922 kubelet[2987]: E0828 17:48:44.481751    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867324480234449,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:48:47 multinode-168922 kubelet[2987]: I0828 17:48:47.239549    2987 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 28 17:48:54 multinode-168922 kubelet[2987]: E0828 17:48:54.483651    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867334483292462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:48:54 multinode-168922 kubelet[2987]: E0828 17:48:54.483686    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867334483292462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:04 multinode-168922 kubelet[2987]: E0828 17:49:04.485274    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867344484756955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:04 multinode-168922 kubelet[2987]: E0828 17:49:04.485350    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867344484756955,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:14 multinode-168922 kubelet[2987]: E0828 17:49:14.491090    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867354486736637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:14 multinode-168922 kubelet[2987]: E0828 17:49:14.491114    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867354486736637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:24 multinode-168922 kubelet[2987]: E0828 17:49:24.496653    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867364492339264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:24 multinode-168922 kubelet[2987]: E0828 17:49:24.496701    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867364492339264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:34 multinode-168922 kubelet[2987]: E0828 17:49:34.498070    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867374497762160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:34 multinode-168922 kubelet[2987]: E0828 17:49:34.498113    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867374497762160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:34 multinode-168922 kubelet[2987]: E0828 17:49:34.498275    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:49:34 multinode-168922 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:49:34 multinode-168922 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:49:34 multinode-168922 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:49:34 multinode-168922 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:49:44 multinode-168922 kubelet[2987]: E0828 17:49:44.500001    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867384499638202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:44 multinode-168922 kubelet[2987]: E0828 17:49:44.500324    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867384499638202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:54 multinode-168922 kubelet[2987]: E0828 17:49:54.502340    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867394501860769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:49:54 multinode-168922 kubelet[2987]: E0828 17:49:54.502392    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867394501860769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:50:04 multinode-168922 kubelet[2987]: E0828 17:50:04.506775    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867404506056011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:50:04 multinode-168922 kubelet[2987]: E0828 17:50:04.506823    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867404506056011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:50:14 multinode-168922 kubelet[2987]: E0828 17:50:14.508734    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867414508340256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:50:14 multinode-168922 kubelet[2987]: E0828 17:50:14.509175    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867414508340256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 17:50:19.850293   48613 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19529-10317/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-168922 -n multinode-168922
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-168922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (324.92s)
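Note on the "bufio.Scanner: token too long" error in the stderr above: it comes from Go's bufio.Scanner, whose default per-line limit is bufio.MaxScanTokenSize (64 KiB), and lastStart.txt evidently contains a longer line. A minimal sketch, not minikube's actual code (the file path and the 10 MiB cap are illustrative), of reading such a file with an enlarged buffer:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path, stands in for .minikube/logs/lastStart.txt
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Raise the per-line cap from the 64 KiB default to 10 MiB so very long
		// log lines no longer trigger "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}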

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 stop
E0828 17:51:03.306940   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-168922 stop: exit status 82 (2m0.466797146s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-168922-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-168922 stop": exit status 82
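The exit status 82 above is minikube's GUEST_STOP_TIMEOUT: the stop path keeps polling the VM state and gives up while the machine still reports "Running". A generic poll-with-deadline sketch of that pattern, assuming hypothetical requestStop/getState stand-ins (not minikube APIs); the 2-minute deadline mirrors the ~2m0s runtime seen above:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// Hypothetical stand-ins for a driver's stop request and state query.
	func requestStop() error        { return nil }
	func getState() (string, error) { return "Running", nil }

	func stopVM(ctx context.Context) error {
		if err := requestStop(); err != nil {
			return err
		}
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return errors.New(`stop: unable to stop vm, current state "Running"`)
			case <-ticker.C:
				if state, _ := getState(); state == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := stopVM(ctx); err != nil {
			fmt.Println("GUEST_STOP_TIMEOUT:", err)
		}
	}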
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-168922 status: exit status 3 (18.668027263s)

                                                
                                                
-- stdout --
	multinode-168922
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-168922-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 17:52:42.826593   49281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.88:22: connect: no route to host
	E0828 17:52:42.826630   49281 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.88:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-168922 status" : exit status 3
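The status failure above reduces to an unreachable SSH endpoint ("dial tcp 192.168.39.88:22: connect: no route to host"), which is why the m02 node is reported as host: Error / kubelet: Nonexistent. A minimal reachability probe for that port, not the test suite's code (the address is taken from the error; the 5-second timeout is an assumption):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the worker node's SSH port before asking for its status.
		conn, err := net.DialTimeout("tcp", "192.168.39.88:22", 5*time.Second)
		if err != nil {
			// A half-stopped VM typically shows up here as "no route to host"
			// or "connection refused".
			fmt.Println("node unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}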
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-168922 -n multinode-168922
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-168922 logs -n 25: (1.368894646s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922:/home/docker/cp-test_multinode-168922-m02_multinode-168922.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922 sudo cat                                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m02_multinode-168922.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03:/home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922-m03 sudo cat                                   | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp testdata/cp-test.txt                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922:/home/docker/cp-test_multinode-168922-m03_multinode-168922.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922 sudo cat                                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02:/home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922-m02 sudo cat                                   | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-168922 node stop m03                                                          | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	| node    | multinode-168922 node start                                                             | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| stop    | -p multinode-168922                                                                     | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| start   | -p multinode-168922                                                                     | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:46 UTC | 28 Aug 24 17:50 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC |                     |
	| node    | multinode-168922 node delete                                                            | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC | 28 Aug 24 17:50 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-168922 stop                                                                   | multinode-168922 | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:46:58
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 17:46:58.436940   47471 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:46:58.437053   47471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:46:58.437064   47471 out.go:358] Setting ErrFile to fd 2...
	I0828 17:46:58.437070   47471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:46:58.437265   47471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:46:58.437814   47471 out.go:352] Setting JSON to false
	I0828 17:46:58.438802   47471 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5364,"bootTime":1724861854,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:46:58.438858   47471 start.go:139] virtualization: kvm guest
	I0828 17:46:58.441284   47471 out.go:177] * [multinode-168922] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:46:58.442634   47471 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:46:58.442634   47471 notify.go:220] Checking for updates...
	I0828 17:46:58.445043   47471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:46:58.446463   47471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:46:58.447673   47471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:46:58.448936   47471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:46:58.450380   47471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:46:58.452139   47471 config.go:182] Loaded profile config "multinode-168922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:46:58.452245   47471 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:46:58.452680   47471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:46:58.452728   47471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:46:58.468092   47471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38929
	I0828 17:46:58.468575   47471 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:46:58.469125   47471 main.go:141] libmachine: Using API Version  1
	I0828 17:46:58.469145   47471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:46:58.469474   47471 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:46:58.469657   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.506044   47471 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:46:58.507476   47471 start.go:297] selected driver: kvm2
	I0828 17:46:58.507496   47471 start.go:901] validating driver "kvm2" against &{Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:46:58.507662   47471 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:46:58.507999   47471 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:46:58.508085   47471 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:46:58.523919   47471 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:46:58.524766   47471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:46:58.524846   47471 cni.go:84] Creating CNI manager for ""
	I0828 17:46:58.524859   47471 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0828 17:46:58.524920   47471 start.go:340] cluster config:
	{Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
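
That cluster config is what the restart below re-applies; the Nodes slice inside it (the unnamed primary control-plane node plus workers m02 and m03, m03 with port 0) is the part the multi-node logic iterates over. Purely as an illustrative, trimmed-down rendering of that slice — the Node struct below is a hypothetical subset of fields for this sketch, not minikube's actual config type:

package main

import (
	"encoding/json"
	"fmt"
)

// Node is a reduced, illustrative subset of the node fields shown in the config dump above.
type Node struct {
	Name              string `json:"Name"`
	IP                string `json:"IP"`
	Port              int    `json:"Port"`
	KubernetesVersion string `json:"KubernetesVersion"`
	ContainerRuntime  string `json:"ContainerRuntime"`
	ControlPlane      bool   `json:"ControlPlane"`
	Worker            bool   `json:"Worker"`
}

func main() {
	// Values copied from the cluster config logged above.
	nodes := []Node{
		{Name: "", IP: "192.168.39.123", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", ControlPlane: true, Worker: true},
		{Name: "m02", IP: "192.168.39.88", Port: 8443, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", Worker: true},
		{Name: "m03", IP: "192.168.39.13", Port: 0, KubernetesVersion: "v1.31.0", ContainerRuntime: "crio", Worker: true},
	}
	out, err := json.MarshalIndent(nodes, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
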
	I0828 17:46:58.525063   47471 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:46:58.526976   47471 out.go:177] * Starting "multinode-168922" primary control-plane node in "multinode-168922" cluster
	I0828 17:46:58.528229   47471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:46:58.528272   47471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 17:46:58.528283   47471 cache.go:56] Caching tarball of preloaded images
	I0828 17:46:58.528399   47471 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 17:46:58.528413   47471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 17:46:58.528536   47471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/config.json ...
	I0828 17:46:58.528753   47471 start.go:360] acquireMachinesLock for multinode-168922: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:46:58.528799   47471 start.go:364] duration metric: took 25.856µs to acquireMachinesLock for "multinode-168922"
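
The acquireMachinesLock entry above carries a 500ms retry delay and a 13m timeout. As a rough, hypothetical sketch of that acquire-with-retry pattern — an exclusive lock file, not minikube's own lock package:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, retrying every delay
// until timeout elapses, roughly like the acquireMachinesLock step above.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Hypothetical lock path; the delay/timeout mirror the logged lock parameters.
	release, err := acquireLock("/tmp/multinode-168922.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to reuse the existing machine")
}
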
	I0828 17:46:58.528815   47471 start.go:96] Skipping create...Using existing machine configuration
	I0828 17:46:58.528821   47471 fix.go:54] fixHost starting: 
	I0828 17:46:58.529099   47471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:46:58.529129   47471 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:46:58.543997   47471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45985
	I0828 17:46:58.544366   47471 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:46:58.544847   47471 main.go:141] libmachine: Using API Version  1
	I0828 17:46:58.544866   47471 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:46:58.545188   47471 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:46:58.545468   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.545645   47471 main.go:141] libmachine: (multinode-168922) Calling .GetState
	I0828 17:46:58.547156   47471 fix.go:112] recreateIfNeeded on multinode-168922: state=Running err=<nil>
	W0828 17:46:58.547179   47471 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 17:46:58.549019   47471 out.go:177] * Updating the running kvm2 "multinode-168922" VM ...
	I0828 17:46:58.550163   47471 machine.go:93] provisionDockerMachine start ...
	I0828 17:46:58.550185   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:46:58.550409   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.552808   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.553164   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.553190   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.553335   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.553530   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.553677   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.553860   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.554028   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.554353   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.554372   47471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:46:58.658819   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-168922
	
	I0828 17:46:58.658858   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.659107   47471 buildroot.go:166] provisioning hostname "multinode-168922"
	I0828 17:46:58.659131   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.659334   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.661702   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.662122   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.662152   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.662296   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.662472   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.662623   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.662749   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.662943   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.663111   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.663123   47471 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-168922 && echo "multinode-168922" | sudo tee /etc/hostname
	I0828 17:46:58.786299   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-168922
	
	I0828 17:46:58.786327   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.789197   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.789591   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.789613   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.789832   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:58.790017   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.790161   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:58.790286   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:58.790457   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:58.790679   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:58.790696   47471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-168922' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-168922/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-168922' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:46:58.891247   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
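
The SSH command above is the usual idempotent /etc/hosts fix-up: ensure a 127.0.1.1 entry for the hostname exists, rewriting an existing 127.0.1.1 line if one is present. A minimal Go equivalent of that logic, assuming local file access rather than SSH:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell snippet above: if no line already ends
// with the hostname, either rewrite the 127.0.1.1 line or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.Match(data) {
		return nil // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte(entry))
	} else {
		data = append(data, []byte("\n"+entry+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "multinode-168922"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
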
	I0828 17:46:58.891274   47471 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:46:58.891293   47471 buildroot.go:174] setting up certificates
	I0828 17:46:58.891304   47471 provision.go:84] configureAuth start
	I0828 17:46:58.891356   47471 main.go:141] libmachine: (multinode-168922) Calling .GetMachineName
	I0828 17:46:58.891664   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:46:58.894492   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.894980   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.895004   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.895082   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:58.897094   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.897481   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:58.897520   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:58.897673   47471 provision.go:143] copyHostCerts
	I0828 17:46:58.897701   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:46:58.897730   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:46:58.897748   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:46:58.897818   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:46:58.897903   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:46:58.897925   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:46:58.897932   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:46:58.897955   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:46:58.898008   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:46:58.898024   47471 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:46:58.898030   47471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:46:58.898050   47471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:46:58.898141   47471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.multinode-168922 san=[127.0.0.1 192.168.39.123 localhost minikube multinode-168922]
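
The server certificate generated here is signed by the minikube CA and carries the SANs listed above (127.0.0.1, 192.168.39.123, localhost, minikube, multinode-168922). A self-contained sketch of producing such a certificate with crypto/x509 — the CA below is generated on the fly purely for illustration, whereas minikube loads ca.pem/ca-key.pem from its certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA material (minikube reuses its existing CA instead).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil { panic(err) }
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil { panic(err) }
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil { panic(err) }

	// Server certificate carrying the SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil { panic(err) }
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-168922"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-168922"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.123")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil { panic(err) }
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
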
	I0828 17:46:59.098676   47471 provision.go:177] copyRemoteCerts
	I0828 17:46:59.098728   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:46:59.098750   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:59.101032   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.101386   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:59.101415   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.101560   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:59.101740   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.101864   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:59.102029   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
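
The sshutil line above records the connection parameters used for the copy (IP 192.168.39.123, port 22, the per-machine id_rsa key, user "docker"). A hypothetical sketch of issuing a single remote command with golang.org/x/crypto/ssh — not minikube's own ssh_runner — using those same parameters:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address taken from the log line above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.123:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // same probe the provisioner ran first
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
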
	I0828 17:46:59.181462   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0828 17:46:59.181550   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0828 17:46:59.205267   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0828 17:46:59.205335   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 17:46:59.229092   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0828 17:46:59.229182   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:46:59.253150   47471 provision.go:87] duration metric: took 361.83317ms to configureAuth
	I0828 17:46:59.253181   47471 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:46:59.253432   47471 config.go:182] Loaded profile config "multinode-168922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:46:59.253523   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:46:59.256474   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.256858   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:46:59.256892   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:46:59.257099   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:46:59.257304   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.257504   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:46:59.257673   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:46:59.257876   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:46:59.258034   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:46:59.258049   47471 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:48:30.119583   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:48:30.119616   47471 machine.go:96] duration metric: took 1m31.569436547s to provisionDockerMachine
	I0828 17:48:30.119642   47471 start.go:293] postStartSetup for "multinode-168922" (driver="kvm2")
	I0828 17:48:30.119656   47471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:48:30.119679   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.120064   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:48:30.120103   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.123818   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.124216   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.124269   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.124438   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.124608   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.124762   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.124868   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.205881   47471 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:48:30.210062   47471 command_runner.go:130] > NAME=Buildroot
	I0828 17:48:30.210094   47471 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0828 17:48:30.210102   47471 command_runner.go:130] > ID=buildroot
	I0828 17:48:30.210109   47471 command_runner.go:130] > VERSION_ID=2023.02.9
	I0828 17:48:30.210115   47471 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0828 17:48:30.210165   47471 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:48:30.210179   47471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:48:30.210235   47471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:48:30.210321   47471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:48:30.210343   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /etc/ssl/certs/175282.pem
	I0828 17:48:30.210429   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:48:30.219637   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:48:30.244168   47471 start.go:296] duration metric: took 124.510825ms for postStartSetup
	I0828 17:48:30.244219   47471 fix.go:56] duration metric: took 1m31.715396636s for fixHost
	I0828 17:48:30.244246   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.246925   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.247344   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.247371   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.247465   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.247666   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.247809   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.247918   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.248079   47471 main.go:141] libmachine: Using SSH client type: native
	I0828 17:48:30.248253   47471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.123 22 <nil> <nil>}
	I0828 17:48:30.248265   47471 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:48:30.350880   47471 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724867310.314524877
	
	I0828 17:48:30.350904   47471 fix.go:216] guest clock: 1724867310.314524877
	I0828 17:48:30.350912   47471 fix.go:229] Guest: 2024-08-28 17:48:30.314524877 +0000 UTC Remote: 2024-08-28 17:48:30.244224825 +0000 UTC m=+91.841449922 (delta=70.300052ms)
	I0828 17:48:30.350930   47471 fix.go:200] guest clock delta is within tolerance: 70.300052ms
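
The delta above comes from comparing the guest's `date +%s.%N` output against the host clock. A small sketch of that comparison; the one-second tolerance below is an assumption for illustration, not necessarily the threshold minikube applies:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano converts "date +%s.%N" output such as
// "1724867310.314524877" into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad or truncate the fractional part to exactly nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseUnixNano("1724867310.314524877") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance)
}
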
	I0828 17:48:30.350934   47471 start.go:83] releasing machines lock for "multinode-168922", held for 1m31.822124715s
	I0828 17:48:30.350968   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.351266   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:48:30.353955   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.354320   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.354351   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.354440   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.354945   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.355112   47471 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:48:30.355235   47471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:48:30.355294   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.355321   47471 ssh_runner.go:195] Run: cat /version.json
	I0828 17:48:30.355343   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:48:30.357796   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.357980   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358157   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.358183   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358349   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.358410   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:30.358439   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:30.358546   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.358594   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:48:30.358743   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:48:30.358775   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.358917   47471 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:48:30.358951   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.359018   47471 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:48:30.434779   47471 command_runner.go:130] > {"iso_version": "v1.33.1-1724775098-19521", "kicbase_version": "v0.0.44-1724667927-19511", "minikube_version": "v1.33.1", "commit": "0d49494423856821e9b08161b42ba19c667a6f89"}
	I0828 17:48:30.435116   47471 ssh_runner.go:195] Run: systemctl --version
	I0828 17:48:30.470008   47471 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0828 17:48:30.470154   47471 command_runner.go:130] > systemd 252 (252)
	I0828 17:48:30.470244   47471 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0828 17:48:30.470317   47471 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:48:30.631308   47471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 17:48:30.637090   47471 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0828 17:48:30.637157   47471 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:48:30.637217   47471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:48:30.646934   47471 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0828 17:48:30.646959   47471 start.go:495] detecting cgroup driver to use...
	I0828 17:48:30.647017   47471 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:48:30.665907   47471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:48:30.680979   47471 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:48:30.681035   47471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:48:30.695635   47471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:48:30.710502   47471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:48:30.863556   47471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:48:31.004175   47471 docker.go:233] disabling docker service ...
	I0828 17:48:31.004251   47471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:48:31.019857   47471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:48:31.033094   47471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:48:31.170504   47471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:48:31.306022   47471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
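
Because CRI-O is the selected runtime, the preceding steps stop, disable and mask both cri-docker and docker. A rough local equivalent of that systemctl sequence using os/exec, continuing past individual failures since some units may simply be absent on a given image:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same systemctl invocations the log issues over SSH, run locally for illustration.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			// Report and keep going; a missing unit should not abort provisioning.
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}
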
	I0828 17:48:31.319788   47471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:48:31.338173   47471 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0828 17:48:31.338214   47471 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 17:48:31.338275   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.348599   47471 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:48:31.348664   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.358530   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.368135   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.378221   47471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:48:31.388175   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.397767   47471 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.408337   47471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:48:31.417974   47471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:48:31.426446   47471 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0828 17:48:31.426687   47471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 17:48:31.435675   47471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:31.578667   47471 ssh_runner.go:195] Run: sudo systemctl restart crio
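
The series of sed edits above pins the pause image, switches the cgroup manager to cgroupfs and opens unprivileged ports before CRI-O is restarted. As an illustration of the first two substitutions only, an equivalent rewrite in Go, assuming the same /etc/crio/crio.conf.d/02-crio.conf layout:

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as the first sed above does.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Force the cgroupfs cgroup manager, as the second sed does.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
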
	I0828 17:48:31.770169   47471 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:48:31.770240   47471 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:48:31.774558   47471 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0828 17:48:31.774582   47471 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0828 17:48:31.774589   47471 command_runner.go:130] > Device: 0,22	Inode: 1334        Links: 1
	I0828 17:48:31.774596   47471 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0828 17:48:31.774602   47471 command_runner.go:130] > Access: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774612   47471 command_runner.go:130] > Modify: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774620   47471 command_runner.go:130] > Change: 2024-08-28 17:48:31.630924110 +0000
	I0828 17:48:31.774627   47471 command_runner.go:130] >  Birth: -
	I0828 17:48:31.774663   47471 start.go:563] Will wait 60s for crictl version
	I0828 17:48:31.774726   47471 ssh_runner.go:195] Run: which crictl
	I0828 17:48:31.778179   47471 command_runner.go:130] > /usr/bin/crictl
	I0828 17:48:31.778238   47471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:48:31.815871   47471 command_runner.go:130] > Version:  0.1.0
	I0828 17:48:31.815890   47471 command_runner.go:130] > RuntimeName:  cri-o
	I0828 17:48:31.815895   47471 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0828 17:48:31.815900   47471 command_runner.go:130] > RuntimeApiVersion:  v1
	I0828 17:48:31.815984   47471 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:48:31.816044   47471 ssh_runner.go:195] Run: crio --version
	I0828 17:48:31.841405   47471 command_runner.go:130] > crio version 1.29.1
	I0828 17:48:31.841431   47471 command_runner.go:130] > Version:        1.29.1
	I0828 17:48:31.841441   47471 command_runner.go:130] > GitCommit:      unknown
	I0828 17:48:31.841448   47471 command_runner.go:130] > GitCommitDate:  unknown
	I0828 17:48:31.841454   47471 command_runner.go:130] > GitTreeState:   clean
	I0828 17:48:31.841463   47471 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0828 17:48:31.841468   47471 command_runner.go:130] > GoVersion:      go1.21.6
	I0828 17:48:31.841473   47471 command_runner.go:130] > Compiler:       gc
	I0828 17:48:31.841477   47471 command_runner.go:130] > Platform:       linux/amd64
	I0828 17:48:31.841482   47471 command_runner.go:130] > Linkmode:       dynamic
	I0828 17:48:31.841487   47471 command_runner.go:130] > BuildTags:      
	I0828 17:48:31.841491   47471 command_runner.go:130] >   containers_image_ostree_stub
	I0828 17:48:31.841496   47471 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0828 17:48:31.841503   47471 command_runner.go:130] >   btrfs_noversion
	I0828 17:48:31.841509   47471 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0828 17:48:31.841513   47471 command_runner.go:130] >   libdm_no_deferred_remove
	I0828 17:48:31.841517   47471 command_runner.go:130] >   seccomp
	I0828 17:48:31.841521   47471 command_runner.go:130] > LDFlags:          unknown
	I0828 17:48:31.841527   47471 command_runner.go:130] > SeccompEnabled:   true
	I0828 17:48:31.841531   47471 command_runner.go:130] > AppArmorEnabled:  false
	I0828 17:48:31.842844   47471 ssh_runner.go:195] Run: crio --version
	I0828 17:48:31.869381   47471 command_runner.go:130] > crio version 1.29.1
	I0828 17:48:31.869406   47471 command_runner.go:130] > Version:        1.29.1
	I0828 17:48:31.869414   47471 command_runner.go:130] > GitCommit:      unknown
	I0828 17:48:31.869420   47471 command_runner.go:130] > GitCommitDate:  unknown
	I0828 17:48:31.869427   47471 command_runner.go:130] > GitTreeState:   clean
	I0828 17:48:31.869439   47471 command_runner.go:130] > BuildDate:      2024-08-27T21:29:17Z
	I0828 17:48:31.869446   47471 command_runner.go:130] > GoVersion:      go1.21.6
	I0828 17:48:31.869452   47471 command_runner.go:130] > Compiler:       gc
	I0828 17:48:31.869460   47471 command_runner.go:130] > Platform:       linux/amd64
	I0828 17:48:31.869467   47471 command_runner.go:130] > Linkmode:       dynamic
	I0828 17:48:31.869479   47471 command_runner.go:130] > BuildTags:      
	I0828 17:48:31.869486   47471 command_runner.go:130] >   containers_image_ostree_stub
	I0828 17:48:31.869494   47471 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0828 17:48:31.869503   47471 command_runner.go:130] >   btrfs_noversion
	I0828 17:48:31.869510   47471 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0828 17:48:31.869515   47471 command_runner.go:130] >   libdm_no_deferred_remove
	I0828 17:48:31.869519   47471 command_runner.go:130] >   seccomp
	I0828 17:48:31.869524   47471 command_runner.go:130] > LDFlags:          unknown
	I0828 17:48:31.869528   47471 command_runner.go:130] > SeccompEnabled:   true
	I0828 17:48:31.869532   47471 command_runner.go:130] > AppArmorEnabled:  false
	I0828 17:48:31.871479   47471 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 17:48:31.872854   47471 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:48:31.875663   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:31.876041   47471 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:48:31.876070   47471 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:48:31.876309   47471 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:48:31.880099   47471 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0828 17:48:31.880211   47471 kubeadm.go:883] updating cluster {Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:48:31.880353   47471 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 17:48:31.880410   47471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:31.920788   47471 command_runner.go:130] > {
	I0828 17:48:31.920815   47471 command_runner.go:130] >   "images": [
	I0828 17:48:31.920822   47471 command_runner.go:130] >     {
	I0828 17:48:31.920834   47471 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0828 17:48:31.920841   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.920852   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0828 17:48:31.920856   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920860   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.920869   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0828 17:48:31.920876   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0828 17:48:31.920879   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920884   47471 command_runner.go:130] >       "size": "87165492",
	I0828 17:48:31.920888   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.920892   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.920899   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.920906   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.920919   47471 command_runner.go:130] >     },
	I0828 17:48:31.920927   47471 command_runner.go:130] >     {
	I0828 17:48:31.920938   47471 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0828 17:48:31.920945   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.920953   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0828 17:48:31.920958   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920962   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.920969   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0828 17:48:31.920979   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0828 17:48:31.920983   47471 command_runner.go:130] >       ],
	I0828 17:48:31.920988   47471 command_runner.go:130] >       "size": "87190579",
	I0828 17:48:31.920992   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.920998   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921004   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921008   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921011   47471 command_runner.go:130] >     },
	I0828 17:48:31.921015   47471 command_runner.go:130] >     {
	I0828 17:48:31.921021   47471 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0828 17:48:31.921026   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921031   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0828 17:48:31.921035   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921039   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921046   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0828 17:48:31.921056   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0828 17:48:31.921060   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921065   47471 command_runner.go:130] >       "size": "1363676",
	I0828 17:48:31.921069   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921073   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921078   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921081   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921085   47471 command_runner.go:130] >     },
	I0828 17:48:31.921090   47471 command_runner.go:130] >     {
	I0828 17:48:31.921095   47471 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0828 17:48:31.921099   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921105   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0828 17:48:31.921109   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921113   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921128   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0828 17:48:31.921162   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0828 17:48:31.921170   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921175   47471 command_runner.go:130] >       "size": "31470524",
	I0828 17:48:31.921178   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921182   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921185   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921189   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921193   47471 command_runner.go:130] >     },
	I0828 17:48:31.921196   47471 command_runner.go:130] >     {
	I0828 17:48:31.921202   47471 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0828 17:48:31.921208   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921213   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0828 17:48:31.921217   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921221   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921230   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0828 17:48:31.921240   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0828 17:48:31.921244   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921248   47471 command_runner.go:130] >       "size": "61245718",
	I0828 17:48:31.921254   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921258   47471 command_runner.go:130] >       "username": "nonroot",
	I0828 17:48:31.921264   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921268   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921273   47471 command_runner.go:130] >     },
	I0828 17:48:31.921278   47471 command_runner.go:130] >     {
	I0828 17:48:31.921284   47471 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0828 17:48:31.921290   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921295   47471 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0828 17:48:31.921300   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921314   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921321   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0828 17:48:31.921328   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0828 17:48:31.921332   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921336   47471 command_runner.go:130] >       "size": "149009664",
	I0828 17:48:31.921340   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921343   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921349   47471 command_runner.go:130] >       },
	I0828 17:48:31.921353   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921357   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921363   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921369   47471 command_runner.go:130] >     },
	I0828 17:48:31.921373   47471 command_runner.go:130] >     {
	I0828 17:48:31.921383   47471 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0828 17:48:31.921387   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921394   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0828 17:48:31.921398   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921401   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921409   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0828 17:48:31.921418   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0828 17:48:31.921421   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921426   47471 command_runner.go:130] >       "size": "95233506",
	I0828 17:48:31.921430   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921434   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921440   47471 command_runner.go:130] >       },
	I0828 17:48:31.921444   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921448   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921454   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921457   47471 command_runner.go:130] >     },
	I0828 17:48:31.921461   47471 command_runner.go:130] >     {
	I0828 17:48:31.921469   47471 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0828 17:48:31.921473   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921478   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0828 17:48:31.921483   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921487   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921506   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0828 17:48:31.921516   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0828 17:48:31.921519   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921524   47471 command_runner.go:130] >       "size": "89437512",
	I0828 17:48:31.921529   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921533   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921536   47471 command_runner.go:130] >       },
	I0828 17:48:31.921540   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921545   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921551   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921554   47471 command_runner.go:130] >     },
	I0828 17:48:31.921558   47471 command_runner.go:130] >     {
	I0828 17:48:31.921564   47471 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0828 17:48:31.921567   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921572   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0828 17:48:31.921575   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921579   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921586   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0828 17:48:31.921592   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0828 17:48:31.921596   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921599   47471 command_runner.go:130] >       "size": "92728217",
	I0828 17:48:31.921603   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.921607   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921611   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921622   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921626   47471 command_runner.go:130] >     },
	I0828 17:48:31.921629   47471 command_runner.go:130] >     {
	I0828 17:48:31.921635   47471 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0828 17:48:31.921641   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921646   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0828 17:48:31.921652   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921656   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921665   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0828 17:48:31.921672   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0828 17:48:31.921678   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921682   47471 command_runner.go:130] >       "size": "68420936",
	I0828 17:48:31.921685   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921689   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.921693   47471 command_runner.go:130] >       },
	I0828 17:48:31.921697   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921702   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921705   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.921709   47471 command_runner.go:130] >     },
	I0828 17:48:31.921712   47471 command_runner.go:130] >     {
	I0828 17:48:31.921720   47471 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0828 17:48:31.921726   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.921730   47471 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0828 17:48:31.921734   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921740   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.921747   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0828 17:48:31.921756   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0828 17:48:31.921760   47471 command_runner.go:130] >       ],
	I0828 17:48:31.921764   47471 command_runner.go:130] >       "size": "742080",
	I0828 17:48:31.921767   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.921772   47471 command_runner.go:130] >         "value": "65535"
	I0828 17:48:31.921775   47471 command_runner.go:130] >       },
	I0828 17:48:31.921779   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.921783   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.921787   47471 command_runner.go:130] >       "pinned": true
	I0828 17:48:31.921793   47471 command_runner.go:130] >     }
	I0828 17:48:31.921797   47471 command_runner.go:130] >   ]
	I0828 17:48:31.921801   47471 command_runner.go:130] > }
	I0828 17:48:31.921978   47471 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:48:31.921990   47471 crio.go:433] Images already preloaded, skipping extraction
	I0828 17:48:31.922038   47471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:48:31.953543   47471 command_runner.go:130] > {
	I0828 17:48:31.953590   47471 command_runner.go:130] >   "images": [
	I0828 17:48:31.953596   47471 command_runner.go:130] >     {
	I0828 17:48:31.953604   47471 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0828 17:48:31.953609   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953615   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0828 17:48:31.953618   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953622   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953629   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0828 17:48:31.953636   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0828 17:48:31.953640   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953645   47471 command_runner.go:130] >       "size": "87165492",
	I0828 17:48:31.953649   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953653   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953660   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953680   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953686   47471 command_runner.go:130] >     },
	I0828 17:48:31.953689   47471 command_runner.go:130] >     {
	I0828 17:48:31.953695   47471 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0828 17:48:31.953707   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953712   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0828 17:48:31.953716   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953720   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953727   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0828 17:48:31.953735   47471 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0828 17:48:31.953738   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953749   47471 command_runner.go:130] >       "size": "87190579",
	I0828 17:48:31.953762   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953768   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953773   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953777   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953780   47471 command_runner.go:130] >     },
	I0828 17:48:31.953785   47471 command_runner.go:130] >     {
	I0828 17:48:31.953790   47471 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0828 17:48:31.953794   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953800   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0828 17:48:31.953804   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953808   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953816   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0828 17:48:31.953823   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0828 17:48:31.953827   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953831   47471 command_runner.go:130] >       "size": "1363676",
	I0828 17:48:31.953834   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953839   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953847   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953850   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953854   47471 command_runner.go:130] >     },
	I0828 17:48:31.953857   47471 command_runner.go:130] >     {
	I0828 17:48:31.953863   47471 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0828 17:48:31.953872   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953876   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0828 17:48:31.953880   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953886   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953893   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0828 17:48:31.953903   47471 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0828 17:48:31.953910   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953914   47471 command_runner.go:130] >       "size": "31470524",
	I0828 17:48:31.953917   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.953922   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.953925   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.953929   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.953933   47471 command_runner.go:130] >     },
	I0828 17:48:31.953936   47471 command_runner.go:130] >     {
	I0828 17:48:31.953948   47471 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0828 17:48:31.953955   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.953960   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0828 17:48:31.953966   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953969   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.953979   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0828 17:48:31.953986   47471 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0828 17:48:31.953991   47471 command_runner.go:130] >       ],
	I0828 17:48:31.953995   47471 command_runner.go:130] >       "size": "61245718",
	I0828 17:48:31.953999   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.954003   47471 command_runner.go:130] >       "username": "nonroot",
	I0828 17:48:31.954007   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954011   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954014   47471 command_runner.go:130] >     },
	I0828 17:48:31.954017   47471 command_runner.go:130] >     {
	I0828 17:48:31.954023   47471 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0828 17:48:31.954029   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954033   47471 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0828 17:48:31.954037   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954041   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954048   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0828 17:48:31.954056   47471 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0828 17:48:31.954059   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954063   47471 command_runner.go:130] >       "size": "149009664",
	I0828 17:48:31.954069   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954088   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954100   47471 command_runner.go:130] >       },
	I0828 17:48:31.954105   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954114   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954118   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954122   47471 command_runner.go:130] >     },
	I0828 17:48:31.954126   47471 command_runner.go:130] >     {
	I0828 17:48:31.954134   47471 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0828 17:48:31.954138   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954146   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0828 17:48:31.954149   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954158   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954168   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0828 17:48:31.954175   47471 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0828 17:48:31.954180   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954184   47471 command_runner.go:130] >       "size": "95233506",
	I0828 17:48:31.954188   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954195   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954198   47471 command_runner.go:130] >       },
	I0828 17:48:31.954202   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954208   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954212   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954218   47471 command_runner.go:130] >     },
	I0828 17:48:31.954221   47471 command_runner.go:130] >     {
	I0828 17:48:31.954226   47471 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0828 17:48:31.954232   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954237   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0828 17:48:31.954240   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954244   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954265   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0828 17:48:31.954275   47471 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0828 17:48:31.954279   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954283   47471 command_runner.go:130] >       "size": "89437512",
	I0828 17:48:31.954286   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954290   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954293   47471 command_runner.go:130] >       },
	I0828 17:48:31.954421   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954424   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954427   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954431   47471 command_runner.go:130] >     },
	I0828 17:48:31.954434   47471 command_runner.go:130] >     {
	I0828 17:48:31.954442   47471 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0828 17:48:31.954448   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954453   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0828 17:48:31.954456   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954462   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954469   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0828 17:48:31.954487   47471 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0828 17:48:31.954493   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954497   47471 command_runner.go:130] >       "size": "92728217",
	I0828 17:48:31.954501   47471 command_runner.go:130] >       "uid": null,
	I0828 17:48:31.954504   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954508   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954514   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954518   47471 command_runner.go:130] >     },
	I0828 17:48:31.954523   47471 command_runner.go:130] >     {
	I0828 17:48:31.954529   47471 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0828 17:48:31.954535   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954539   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0828 17:48:31.954543   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954547   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954554   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0828 17:48:31.954563   47471 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0828 17:48:31.954566   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954570   47471 command_runner.go:130] >       "size": "68420936",
	I0828 17:48:31.954574   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954577   47471 command_runner.go:130] >         "value": "0"
	I0828 17:48:31.954589   47471 command_runner.go:130] >       },
	I0828 17:48:31.954593   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954598   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954602   47471 command_runner.go:130] >       "pinned": false
	I0828 17:48:31.954608   47471 command_runner.go:130] >     },
	I0828 17:48:31.954611   47471 command_runner.go:130] >     {
	I0828 17:48:31.954617   47471 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0828 17:48:31.954622   47471 command_runner.go:130] >       "repoTags": [
	I0828 17:48:31.954627   47471 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0828 17:48:31.954630   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954634   47471 command_runner.go:130] >       "repoDigests": [
	I0828 17:48:31.954641   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0828 17:48:31.954650   47471 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0828 17:48:31.954653   47471 command_runner.go:130] >       ],
	I0828 17:48:31.954657   47471 command_runner.go:130] >       "size": "742080",
	I0828 17:48:31.954661   47471 command_runner.go:130] >       "uid": {
	I0828 17:48:31.954671   47471 command_runner.go:130] >         "value": "65535"
	I0828 17:48:31.954677   47471 command_runner.go:130] >       },
	I0828 17:48:31.954681   47471 command_runner.go:130] >       "username": "",
	I0828 17:48:31.954685   47471 command_runner.go:130] >       "spec": null,
	I0828 17:48:31.954689   47471 command_runner.go:130] >       "pinned": true
	I0828 17:48:31.954693   47471 command_runner.go:130] >     }
	I0828 17:48:31.954698   47471 command_runner.go:130] >   ]
	I0828 17:48:31.954701   47471 command_runner.go:130] > }
	I0828 17:48:31.955005   47471 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 17:48:31.955019   47471 cache_images.go:84] Images are preloaded, skipping loading
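(Editor's note, for readers unfamiliar with the preload check logged above: minikube decides whether to extract the preloaded image tarball by listing the runtime's images with "sudo crictl images --output json" and comparing the reported repoTags against the image set required for the selected Kubernetes version. The sketch below is illustrative only, not minikube's actual code; the struct names and the expected-image list are assumptions chosen to mirror the JSON shape visible in the log.)

    // preloadcheck_sketch.go - hypothetical illustration of the check logged above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the relevant part of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Hypothetical expected set; the real list depends on the Kubernetes version (v1.31.0 here).
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.31.0",
            "registry.k8s.io/etcd:3.5.15-0",
            "registry.k8s.io/coredns/coredns:v1.11.1",
        }

        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }

        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            fmt.Println("bad JSON:", err)
            return
        }

        // Index every tag reported by the runtime.
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                have[tag] = true
            }
        }

        for _, want := range expected {
            if !have[want] {
                fmt.Println("missing image:", want)
                return
            }
        }
        fmt.Println("all expected images are present; extraction can be skipped")
    }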
	I0828 17:48:31.955027   47471 kubeadm.go:934] updating node { 192.168.39.123 8443 v1.31.0 crio true true} ...
	I0828 17:48:31.955121   47471 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-168922 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
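(Editor's note: the "[Unit]/[Service]/[Install]" fragment logged above is the kubelet systemd override that minikube renders from the node's settings (hostname multinode-168922, node IP 192.168.39.123, kubelet binary under /var/lib/minikube/binaries/v1.31.0) before invoking kubeadm. The following is a minimal, hypothetical sketch of how such a drop-in can be rendered with text/template; the template text, field names, and the typical drop-in path mentioned in the comment are assumptions, not minikube's source.)

    // kubeletunit_sketch.go - hypothetical rendering of the drop-in shown in the log.
    package main

    import (
        "os"
        "text/template"
    )

    type nodeConfig struct {
        KubeletPath string
        Hostname    string
        NodeIP      string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        cfg := nodeConfig{
            KubeletPath: "/var/lib/minikube/binaries/v1.31.0/kubelet",
            Hostname:    "multinode-168922",
            NodeIP:      "192.168.39.123",
        }
        // Print the rendered drop-in; on a node it would typically be written under
        // /etc/systemd/system/kubelet.service.d/ (path assumed here).
        t := template.Must(template.New("kubelet-unit").Parse(unitTmpl))
        if err := t.Execute(os.Stdout, cfg); err != nil {
            panic(err)
        }
    }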
	I0828 17:48:31.955183   47471 ssh_runner.go:195] Run: crio config
	I0828 17:48:31.987357   47471 command_runner.go:130] ! time="2024-08-28 17:48:31.951058285Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0828 17:48:31.993006   47471 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0828 17:48:31.998146   47471 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0828 17:48:31.998165   47471 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0828 17:48:31.998175   47471 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0828 17:48:31.998180   47471 command_runner.go:130] > #
	I0828 17:48:31.998191   47471 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0828 17:48:31.998203   47471 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0828 17:48:31.998211   47471 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0828 17:48:31.998231   47471 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0828 17:48:31.998241   47471 command_runner.go:130] > # reload'.
	I0828 17:48:31.998251   47471 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0828 17:48:31.998263   47471 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0828 17:48:31.998275   47471 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0828 17:48:31.998284   47471 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0828 17:48:31.998288   47471 command_runner.go:130] > [crio]
	I0828 17:48:31.998297   47471 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0828 17:48:31.998304   47471 command_runner.go:130] > # containers images, in this directory.
	I0828 17:48:31.998309   47471 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0828 17:48:31.998319   47471 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0828 17:48:31.998332   47471 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0828 17:48:31.998346   47471 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0828 17:48:31.998354   47471 command_runner.go:130] > # imagestore = ""
	I0828 17:48:31.998360   47471 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0828 17:48:31.998368   47471 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0828 17:48:31.998372   47471 command_runner.go:130] > storage_driver = "overlay"
	I0828 17:48:31.998380   47471 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0828 17:48:31.998386   47471 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0828 17:48:31.998392   47471 command_runner.go:130] > storage_option = [
	I0828 17:48:31.998397   47471 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0828 17:48:31.998400   47471 command_runner.go:130] > ]
	I0828 17:48:31.998406   47471 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0828 17:48:31.998414   47471 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0828 17:48:31.998418   47471 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0828 17:48:31.998425   47471 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0828 17:48:31.998431   47471 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0828 17:48:31.998437   47471 command_runner.go:130] > # always happen on a node reboot
	I0828 17:48:31.998442   47471 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0828 17:48:31.998455   47471 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0828 17:48:31.998463   47471 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0828 17:48:31.998469   47471 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0828 17:48:31.998477   47471 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0828 17:48:31.998483   47471 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0828 17:48:31.998493   47471 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0828 17:48:31.998497   47471 command_runner.go:130] > # internal_wipe = true
	I0828 17:48:31.998505   47471 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0828 17:48:31.998512   47471 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0828 17:48:31.998516   47471 command_runner.go:130] > # internal_repair = false
	I0828 17:48:31.998524   47471 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0828 17:48:31.998530   47471 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0828 17:48:31.998538   47471 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0828 17:48:31.998543   47471 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0828 17:48:31.998551   47471 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0828 17:48:31.998555   47471 command_runner.go:130] > [crio.api]
	I0828 17:48:31.998560   47471 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0828 17:48:31.998565   47471 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0828 17:48:31.998577   47471 command_runner.go:130] > # IP address on which the stream server will listen.
	I0828 17:48:31.998591   47471 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0828 17:48:31.998597   47471 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0828 17:48:31.998601   47471 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0828 17:48:31.998606   47471 command_runner.go:130] > # stream_port = "0"
	I0828 17:48:31.998611   47471 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0828 17:48:31.998617   47471 command_runner.go:130] > # stream_enable_tls = false
	I0828 17:48:31.998622   47471 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0828 17:48:31.998628   47471 command_runner.go:130] > # stream_idle_timeout = ""
	I0828 17:48:31.998637   47471 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0828 17:48:31.998645   47471 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0828 17:48:31.998648   47471 command_runner.go:130] > # minutes.
	I0828 17:48:31.998653   47471 command_runner.go:130] > # stream_tls_cert = ""
	I0828 17:48:31.998658   47471 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0828 17:48:31.998666   47471 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0828 17:48:31.998670   47471 command_runner.go:130] > # stream_tls_key = ""
	I0828 17:48:31.998675   47471 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0828 17:48:31.998683   47471 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0828 17:48:31.998702   47471 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0828 17:48:31.998708   47471 command_runner.go:130] > # stream_tls_ca = ""
	I0828 17:48:31.998715   47471 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0828 17:48:31.998722   47471 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0828 17:48:31.998729   47471 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0828 17:48:31.998735   47471 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0828 17:48:31.998740   47471 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0828 17:48:31.998748   47471 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0828 17:48:31.998751   47471 command_runner.go:130] > [crio.runtime]
	I0828 17:48:31.998759   47471 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0828 17:48:31.998764   47471 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0828 17:48:31.998770   47471 command_runner.go:130] > # "nofile=1024:2048"
	I0828 17:48:31.998792   47471 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0828 17:48:31.998802   47471 command_runner.go:130] > # default_ulimits = [
	I0828 17:48:31.998805   47471 command_runner.go:130] > # ]
	I0828 17:48:31.998811   47471 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0828 17:48:31.998817   47471 command_runner.go:130] > # no_pivot = false
	I0828 17:48:31.998822   47471 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0828 17:48:31.998836   47471 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0828 17:48:31.998843   47471 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0828 17:48:31.998849   47471 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0828 17:48:31.998856   47471 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0828 17:48:31.998862   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0828 17:48:31.998868   47471 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0828 17:48:31.998873   47471 command_runner.go:130] > # Cgroup setting for conmon
	I0828 17:48:31.998881   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0828 17:48:31.998885   47471 command_runner.go:130] > conmon_cgroup = "pod"
	I0828 17:48:31.998892   47471 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0828 17:48:31.998897   47471 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0828 17:48:31.998907   47471 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0828 17:48:31.998913   47471 command_runner.go:130] > conmon_env = [
	I0828 17:48:31.998918   47471 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0828 17:48:31.998924   47471 command_runner.go:130] > ]
	I0828 17:48:31.998929   47471 command_runner.go:130] > # Additional environment variables to set for all the
	I0828 17:48:31.998936   47471 command_runner.go:130] > # containers. These are overridden if set in the
	I0828 17:48:31.998942   47471 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0828 17:48:31.998946   47471 command_runner.go:130] > # default_env = [
	I0828 17:48:31.998949   47471 command_runner.go:130] > # ]
	I0828 17:48:31.998955   47471 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0828 17:48:31.998963   47471 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0828 17:48:31.998967   47471 command_runner.go:130] > # selinux = false
	I0828 17:48:31.998975   47471 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0828 17:48:31.998981   47471 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0828 17:48:31.998989   47471 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0828 17:48:31.998993   47471 command_runner.go:130] > # seccomp_profile = ""
	I0828 17:48:31.999000   47471 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0828 17:48:31.999006   47471 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0828 17:48:31.999013   47471 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0828 17:48:31.999018   47471 command_runner.go:130] > # which might increase security.
	I0828 17:48:31.999026   47471 command_runner.go:130] > # This option is currently deprecated,
	I0828 17:48:31.999031   47471 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0828 17:48:31.999036   47471 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0828 17:48:31.999047   47471 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0828 17:48:31.999056   47471 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0828 17:48:31.999066   47471 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0828 17:48:31.999074   47471 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0828 17:48:31.999079   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999084   47471 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0828 17:48:31.999090   47471 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0828 17:48:31.999096   47471 command_runner.go:130] > # the cgroup blockio controller.
	I0828 17:48:31.999100   47471 command_runner.go:130] > # blockio_config_file = ""
	I0828 17:48:31.999106   47471 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0828 17:48:31.999112   47471 command_runner.go:130] > # blockio parameters.
	I0828 17:48:31.999116   47471 command_runner.go:130] > # blockio_reload = false
	I0828 17:48:31.999125   47471 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0828 17:48:31.999131   47471 command_runner.go:130] > # irqbalance daemon.
	I0828 17:48:31.999138   47471 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0828 17:48:31.999148   47471 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0828 17:48:31.999155   47471 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0828 17:48:31.999163   47471 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0828 17:48:31.999169   47471 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0828 17:48:31.999177   47471 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0828 17:48:31.999182   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999188   47471 command_runner.go:130] > # rdt_config_file = ""
	I0828 17:48:31.999195   47471 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0828 17:48:31.999201   47471 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0828 17:48:31.999246   47471 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0828 17:48:31.999255   47471 command_runner.go:130] > # separate_pull_cgroup = ""
	I0828 17:48:31.999260   47471 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0828 17:48:31.999266   47471 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0828 17:48:31.999270   47471 command_runner.go:130] > # will be added.
	I0828 17:48:31.999274   47471 command_runner.go:130] > # default_capabilities = [
	I0828 17:48:31.999277   47471 command_runner.go:130] > # 	"CHOWN",
	I0828 17:48:31.999281   47471 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0828 17:48:31.999285   47471 command_runner.go:130] > # 	"FSETID",
	I0828 17:48:31.999288   47471 command_runner.go:130] > # 	"FOWNER",
	I0828 17:48:31.999292   47471 command_runner.go:130] > # 	"SETGID",
	I0828 17:48:31.999298   47471 command_runner.go:130] > # 	"SETUID",
	I0828 17:48:31.999302   47471 command_runner.go:130] > # 	"SETPCAP",
	I0828 17:48:31.999306   47471 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0828 17:48:31.999316   47471 command_runner.go:130] > # 	"KILL",
	I0828 17:48:31.999321   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999329   47471 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0828 17:48:31.999338   47471 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0828 17:48:31.999344   47471 command_runner.go:130] > # add_inheritable_capabilities = false
	I0828 17:48:31.999353   47471 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0828 17:48:31.999360   47471 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0828 17:48:31.999366   47471 command_runner.go:130] > default_sysctls = [
	I0828 17:48:31.999371   47471 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0828 17:48:31.999376   47471 command_runner.go:130] > ]
	I0828 17:48:31.999380   47471 command_runner.go:130] > # List of devices on the host that a
	I0828 17:48:31.999386   47471 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0828 17:48:31.999393   47471 command_runner.go:130] > # allowed_devices = [
	I0828 17:48:31.999397   47471 command_runner.go:130] > # 	"/dev/fuse",
	I0828 17:48:31.999402   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999406   47471 command_runner.go:130] > # List of additional devices. specified as
	I0828 17:48:31.999413   47471 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0828 17:48:31.999420   47471 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0828 17:48:31.999428   47471 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0828 17:48:31.999434   47471 command_runner.go:130] > # additional_devices = [
	I0828 17:48:31.999437   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999446   47471 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0828 17:48:31.999450   47471 command_runner.go:130] > # cdi_spec_dirs = [
	I0828 17:48:31.999453   47471 command_runner.go:130] > # 	"/etc/cdi",
	I0828 17:48:31.999457   47471 command_runner.go:130] > # 	"/var/run/cdi",
	I0828 17:48:31.999460   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999466   47471 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0828 17:48:31.999473   47471 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0828 17:48:31.999479   47471 command_runner.go:130] > # Defaults to false.
	I0828 17:48:31.999485   47471 command_runner.go:130] > # device_ownership_from_security_context = false
	I0828 17:48:31.999491   47471 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0828 17:48:31.999499   47471 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0828 17:48:31.999503   47471 command_runner.go:130] > # hooks_dir = [
	I0828 17:48:31.999509   47471 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0828 17:48:31.999512   47471 command_runner.go:130] > # ]
	I0828 17:48:31.999518   47471 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0828 17:48:31.999530   47471 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0828 17:48:31.999538   47471 command_runner.go:130] > # its default mounts from the following two files:
	I0828 17:48:31.999541   47471 command_runner.go:130] > #
	I0828 17:48:31.999546   47471 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0828 17:48:31.999555   47471 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0828 17:48:31.999560   47471 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0828 17:48:31.999565   47471 command_runner.go:130] > #
	I0828 17:48:31.999570   47471 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0828 17:48:31.999581   47471 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0828 17:48:31.999590   47471 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0828 17:48:31.999595   47471 command_runner.go:130] > #      only add mounts it finds in this file.
	I0828 17:48:31.999600   47471 command_runner.go:130] > #
	I0828 17:48:31.999604   47471 command_runner.go:130] > # default_mounts_file = ""
	I0828 17:48:31.999609   47471 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0828 17:48:31.999617   47471 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0828 17:48:31.999622   47471 command_runner.go:130] > pids_limit = 1024
	I0828 17:48:31.999630   47471 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0828 17:48:31.999638   47471 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0828 17:48:31.999644   47471 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0828 17:48:31.999653   47471 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0828 17:48:31.999657   47471 command_runner.go:130] > # log_size_max = -1
	I0828 17:48:31.999663   47471 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0828 17:48:31.999673   47471 command_runner.go:130] > # log_to_journald = false
	I0828 17:48:31.999679   47471 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0828 17:48:31.999686   47471 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0828 17:48:31.999690   47471 command_runner.go:130] > # Path to directory for container attach sockets.
	I0828 17:48:31.999695   47471 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0828 17:48:31.999701   47471 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0828 17:48:31.999705   47471 command_runner.go:130] > # bind_mount_prefix = ""
	I0828 17:48:31.999710   47471 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0828 17:48:31.999715   47471 command_runner.go:130] > # read_only = false
	I0828 17:48:31.999721   47471 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0828 17:48:31.999728   47471 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0828 17:48:31.999733   47471 command_runner.go:130] > # live configuration reload.
	I0828 17:48:31.999738   47471 command_runner.go:130] > # log_level = "info"
	I0828 17:48:31.999744   47471 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0828 17:48:31.999755   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:31.999762   47471 command_runner.go:130] > # log_filter = ""
	I0828 17:48:31.999767   47471 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0828 17:48:31.999776   47471 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0828 17:48:31.999779   47471 command_runner.go:130] > # separated by comma.
	I0828 17:48:31.999786   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999792   47471 command_runner.go:130] > # uid_mappings = ""
	I0828 17:48:31.999798   47471 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0828 17:48:31.999805   47471 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0828 17:48:31.999810   47471 command_runner.go:130] > # separated by comma.
	I0828 17:48:31.999819   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999823   47471 command_runner.go:130] > # gid_mappings = ""
	I0828 17:48:31.999831   47471 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0828 17:48:31.999836   47471 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0828 17:48:31.999844   47471 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0828 17:48:31.999851   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999857   47471 command_runner.go:130] > # minimum_mappable_uid = -1
	I0828 17:48:31.999863   47471 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0828 17:48:31.999869   47471 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0828 17:48:31.999875   47471 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0828 17:48:31.999883   47471 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0828 17:48:31.999891   47471 command_runner.go:130] > # minimum_mappable_gid = -1
	I0828 17:48:31.999897   47471 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0828 17:48:31.999903   47471 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0828 17:48:31.999908   47471 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0828 17:48:31.999914   47471 command_runner.go:130] > # ctr_stop_timeout = 30
	I0828 17:48:31.999920   47471 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0828 17:48:31.999927   47471 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0828 17:48:31.999932   47471 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0828 17:48:31.999939   47471 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0828 17:48:31.999943   47471 command_runner.go:130] > drop_infra_ctr = false
	I0828 17:48:31.999951   47471 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0828 17:48:31.999956   47471 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0828 17:48:31.999965   47471 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0828 17:48:31.999969   47471 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0828 17:48:31.999976   47471 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0828 17:48:31.999987   47471 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0828 17:48:31.999995   47471 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0828 17:48:32.000000   47471 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0828 17:48:32.000007   47471 command_runner.go:130] > # shared_cpuset = ""
	I0828 17:48:32.000013   47471 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0828 17:48:32.000020   47471 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0828 17:48:32.000024   47471 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0828 17:48:32.000031   47471 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0828 17:48:32.000037   47471 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0828 17:48:32.000042   47471 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0828 17:48:32.000049   47471 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0828 17:48:32.000055   47471 command_runner.go:130] > # enable_criu_support = false
	I0828 17:48:32.000060   47471 command_runner.go:130] > # Enable/disable the generation of the container,
	I0828 17:48:32.000068   47471 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0828 17:48:32.000072   47471 command_runner.go:130] > # enable_pod_events = false
	I0828 17:48:32.000080   47471 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0828 17:48:32.000093   47471 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0828 17:48:32.000097   47471 command_runner.go:130] > # default_runtime = "runc"
	I0828 17:48:32.000103   47471 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0828 17:48:32.000109   47471 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0828 17:48:32.000120   47471 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0828 17:48:32.000129   47471 command_runner.go:130] > # creation as a file is not desired either.
	I0828 17:48:32.000137   47471 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0828 17:48:32.000144   47471 command_runner.go:130] > # the hostname is being managed dynamically.
	I0828 17:48:32.000149   47471 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0828 17:48:32.000155   47471 command_runner.go:130] > # ]
	I0828 17:48:32.000160   47471 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0828 17:48:32.000168   47471 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0828 17:48:32.000174   47471 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0828 17:48:32.000181   47471 command_runner.go:130] > # Each entry in the table should follow the format:
	I0828 17:48:32.000184   47471 command_runner.go:130] > #
	I0828 17:48:32.000188   47471 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0828 17:48:32.000196   47471 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0828 17:48:32.000237   47471 command_runner.go:130] > # runtime_type = "oci"
	I0828 17:48:32.000244   47471 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0828 17:48:32.000253   47471 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0828 17:48:32.000259   47471 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0828 17:48:32.000264   47471 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0828 17:48:32.000270   47471 command_runner.go:130] > # monitor_env = []
	I0828 17:48:32.000275   47471 command_runner.go:130] > # privileged_without_host_devices = false
	I0828 17:48:32.000279   47471 command_runner.go:130] > # allowed_annotations = []
	I0828 17:48:32.000286   47471 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0828 17:48:32.000290   47471 command_runner.go:130] > # Where:
	I0828 17:48:32.000295   47471 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0828 17:48:32.000301   47471 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0828 17:48:32.000307   47471 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0828 17:48:32.000315   47471 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0828 17:48:32.000318   47471 command_runner.go:130] > #   in $PATH.
	I0828 17:48:32.000327   47471 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0828 17:48:32.000333   47471 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0828 17:48:32.000341   47471 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0828 17:48:32.000345   47471 command_runner.go:130] > #   state.
	I0828 17:48:32.000351   47471 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0828 17:48:32.000358   47471 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0828 17:48:32.000364   47471 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0828 17:48:32.000369   47471 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0828 17:48:32.000377   47471 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0828 17:48:32.000383   47471 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0828 17:48:32.000391   47471 command_runner.go:130] > #   The currently recognized values are:
	I0828 17:48:32.000398   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0828 17:48:32.000407   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0828 17:48:32.000413   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0828 17:48:32.000419   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0828 17:48:32.000426   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0828 17:48:32.000435   47471 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0828 17:48:32.000441   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0828 17:48:32.000449   47471 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0828 17:48:32.000455   47471 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0828 17:48:32.000462   47471 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0828 17:48:32.000467   47471 command_runner.go:130] > #   deprecated option "conmon".
	I0828 17:48:32.000476   47471 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0828 17:48:32.000486   47471 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0828 17:48:32.000495   47471 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0828 17:48:32.000499   47471 command_runner.go:130] > #   should be moved to the container's cgroup
	I0828 17:48:32.000507   47471 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0828 17:48:32.000512   47471 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0828 17:48:32.000520   47471 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0828 17:48:32.000525   47471 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0828 17:48:32.000530   47471 command_runner.go:130] > #
	I0828 17:48:32.000535   47471 command_runner.go:130] > # Using the seccomp notifier feature:
	I0828 17:48:32.000538   47471 command_runner.go:130] > #
	I0828 17:48:32.000543   47471 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0828 17:48:32.000551   47471 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0828 17:48:32.000554   47471 command_runner.go:130] > #
	I0828 17:48:32.000562   47471 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0828 17:48:32.000570   47471 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0828 17:48:32.000573   47471 command_runner.go:130] > #
	I0828 17:48:32.000582   47471 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0828 17:48:32.000588   47471 command_runner.go:130] > # feature.
	I0828 17:48:32.000591   47471 command_runner.go:130] > #
	I0828 17:48:32.000596   47471 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0828 17:48:32.000604   47471 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0828 17:48:32.000610   47471 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0828 17:48:32.000620   47471 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0828 17:48:32.000627   47471 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0828 17:48:32.000632   47471 command_runner.go:130] > #
	I0828 17:48:32.000637   47471 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0828 17:48:32.000645   47471 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0828 17:48:32.000648   47471 command_runner.go:130] > #
	I0828 17:48:32.000655   47471 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0828 17:48:32.000662   47471 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0828 17:48:32.000666   47471 command_runner.go:130] > #
	I0828 17:48:32.000672   47471 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0828 17:48:32.000679   47471 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0828 17:48:32.000683   47471 command_runner.go:130] > # limitation.
	I0828 17:48:32.000688   47471 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0828 17:48:32.000694   47471 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0828 17:48:32.000704   47471 command_runner.go:130] > runtime_type = "oci"
	I0828 17:48:32.000711   47471 command_runner.go:130] > runtime_root = "/run/runc"
	I0828 17:48:32.000715   47471 command_runner.go:130] > runtime_config_path = ""
	I0828 17:48:32.000719   47471 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0828 17:48:32.000725   47471 command_runner.go:130] > monitor_cgroup = "pod"
	I0828 17:48:32.000729   47471 command_runner.go:130] > monitor_exec_cgroup = ""
	I0828 17:48:32.000733   47471 command_runner.go:130] > monitor_env = [
	I0828 17:48:32.000738   47471 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0828 17:48:32.000743   47471 command_runner.go:130] > ]
	I0828 17:48:32.000748   47471 command_runner.go:130] > privileged_without_host_devices = false
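The runc entry above is the only runtime handler defined in this configuration. Purely as a sketch (not part of this run), an additional handler could be declared in a CRI-O drop-in file; the crun binary path, the drop-in filename, and the use of /etc/crio/crio.conf.d are assumptions for illustration, not values taken from this log:

	# Hypothetical drop-in defining a second runtime handler that is allowed to
	# process the seccomp notifier annotation described in the comments above.
	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

A pod would then select such a handler through a RuntimeClass whose handler field is "crun"; per the notes above, the seccomp notifier additionally needs runc 1.1.0 or crun 0.19 (or newer) and a pod restartPolicy of "Never".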
	I0828 17:48:32.000756   47471 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0828 17:48:32.000761   47471 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0828 17:48:32.000769   47471 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0828 17:48:32.000776   47471 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0828 17:48:32.000785   47471 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0828 17:48:32.000790   47471 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0828 17:48:32.000800   47471 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0828 17:48:32.000808   47471 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0828 17:48:32.000814   47471 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0828 17:48:32.000821   47471 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0828 17:48:32.000824   47471 command_runner.go:130] > # Example:
	I0828 17:48:32.000828   47471 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0828 17:48:32.000833   47471 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0828 17:48:32.000837   47471 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0828 17:48:32.000843   47471 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0828 17:48:32.000847   47471 command_runner.go:130] > # cpuset = 0
	I0828 17:48:32.000852   47471 command_runner.go:130] > # cpushares = "0-1"
	I0828 17:48:32.000856   47471 command_runner.go:130] > # Where:
	I0828 17:48:32.000860   47471 command_runner.go:130] > # The workload name is workload-type.
	I0828 17:48:32.000866   47471 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0828 17:48:32.000871   47471 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0828 17:48:32.000877   47471 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0828 17:48:32.000884   47471 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0828 17:48:32.000889   47471 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
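As an illustration of the opt-in mechanism described above (a sketch only: the "io.crio/workload" and "io.crio.workload-type" names come from the commented example, and the workload itself would still have to be defined under [crio.runtime.workloads] in crio.conf), a pod could carry the activation annotation plus a per-container override of the form $annotation_prefix.$resource/$ctrName:

	# Hypothetical pod using the example workload annotations from the comments above.
	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                        # activation annotation (key only, value ignored)
	    io.crio.workload-type.cpushares/app: "512"  # per-container override for the "app" container
	spec:
	  containers:
	  - name: app
	    image: busybox
	    command: ["sleep", "3600"]
	EOF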
	I0828 17:48:32.000894   47471 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0828 17:48:32.000899   47471 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0828 17:48:32.000907   47471 command_runner.go:130] > # Default value is set to true
	I0828 17:48:32.000912   47471 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0828 17:48:32.000917   47471 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0828 17:48:32.000921   47471 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0828 17:48:32.000925   47471 command_runner.go:130] > # Default value is set to 'false'
	I0828 17:48:32.000928   47471 command_runner.go:130] > # disable_hostport_mapping = false
	I0828 17:48:32.000934   47471 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0828 17:48:32.000937   47471 command_runner.go:130] > #
	I0828 17:48:32.000942   47471 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0828 17:48:32.000947   47471 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0828 17:48:32.000953   47471 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0828 17:48:32.000962   47471 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0828 17:48:32.000966   47471 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0828 17:48:32.000970   47471 command_runner.go:130] > [crio.image]
	I0828 17:48:32.000976   47471 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0828 17:48:32.000983   47471 command_runner.go:130] > # default_transport = "docker://"
	I0828 17:48:32.000989   47471 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0828 17:48:32.000997   47471 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0828 17:48:32.001001   47471 command_runner.go:130] > # global_auth_file = ""
	I0828 17:48:32.001006   47471 command_runner.go:130] > # The image used to instantiate infra containers.
	I0828 17:48:32.001012   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:32.001019   47471 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0828 17:48:32.001025   47471 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0828 17:48:32.001033   47471 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0828 17:48:32.001037   47471 command_runner.go:130] > # This option supports live configuration reload.
	I0828 17:48:32.001045   47471 command_runner.go:130] > # pause_image_auth_file = ""
	I0828 17:48:32.001051   47471 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0828 17:48:32.001056   47471 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0828 17:48:32.001064   47471 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0828 17:48:32.001070   47471 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0828 17:48:32.001076   47471 command_runner.go:130] > # pause_command = "/pause"
	I0828 17:48:32.001082   47471 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0828 17:48:32.001089   47471 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0828 17:48:32.001095   47471 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0828 17:48:32.001103   47471 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0828 17:48:32.001109   47471 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0828 17:48:32.001121   47471 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0828 17:48:32.001128   47471 command_runner.go:130] > # pinned_images = [
	I0828 17:48:32.001131   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001137   47471 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0828 17:48:32.001143   47471 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0828 17:48:32.001149   47471 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0828 17:48:32.001158   47471 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0828 17:48:32.001163   47471 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0828 17:48:32.001169   47471 command_runner.go:130] > # signature_policy = ""
	I0828 17:48:32.001174   47471 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0828 17:48:32.001183   47471 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0828 17:48:32.001191   47471 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0828 17:48:32.001198   47471 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0828 17:48:32.001205   47471 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0828 17:48:32.001210   47471 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0828 17:48:32.001218   47471 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0828 17:48:32.001224   47471 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0828 17:48:32.001228   47471 command_runner.go:130] > # changing them here.
	I0828 17:48:32.001232   47471 command_runner.go:130] > # insecure_registries = [
	I0828 17:48:32.001235   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001241   47471 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0828 17:48:32.001248   47471 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0828 17:48:32.001252   47471 command_runner.go:130] > # image_volumes = "mkdir"
	I0828 17:48:32.001259   47471 command_runner.go:130] > # Temporary directory to use for storing big files
	I0828 17:48:32.001263   47471 command_runner.go:130] > # big_files_temporary_dir = ""
	I0828 17:48:32.001272   47471 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0828 17:48:32.001277   47471 command_runner.go:130] > # CNI plugins.
	I0828 17:48:32.001281   47471 command_runner.go:130] > [crio.network]
	I0828 17:48:32.001286   47471 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0828 17:48:32.001294   47471 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0828 17:48:32.001299   47471 command_runner.go:130] > # cni_default_network = ""
	I0828 17:48:32.001306   47471 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0828 17:48:32.001310   47471 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0828 17:48:32.001316   47471 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0828 17:48:32.001322   47471 command_runner.go:130] > # plugin_dirs = [
	I0828 17:48:32.001326   47471 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0828 17:48:32.001337   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001345   47471 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0828 17:48:32.001349   47471 command_runner.go:130] > [crio.metrics]
	I0828 17:48:32.001353   47471 command_runner.go:130] > # Globally enable or disable metrics support.
	I0828 17:48:32.001359   47471 command_runner.go:130] > enable_metrics = true
	I0828 17:48:32.001363   47471 command_runner.go:130] > # Specify enabled metrics collectors.
	I0828 17:48:32.001370   47471 command_runner.go:130] > # Per default all metrics are enabled.
	I0828 17:48:32.001375   47471 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0828 17:48:32.001384   47471 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0828 17:48:32.001389   47471 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0828 17:48:32.001393   47471 command_runner.go:130] > # metrics_collectors = [
	I0828 17:48:32.001397   47471 command_runner.go:130] > # 	"operations",
	I0828 17:48:32.001401   47471 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0828 17:48:32.001405   47471 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0828 17:48:32.001409   47471 command_runner.go:130] > # 	"operations_errors",
	I0828 17:48:32.001413   47471 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0828 17:48:32.001417   47471 command_runner.go:130] > # 	"image_pulls_by_name",
	I0828 17:48:32.001421   47471 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0828 17:48:32.001425   47471 command_runner.go:130] > # 	"image_pulls_failures",
	I0828 17:48:32.001429   47471 command_runner.go:130] > # 	"image_pulls_successes",
	I0828 17:48:32.001433   47471 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0828 17:48:32.001439   47471 command_runner.go:130] > # 	"image_layer_reuse",
	I0828 17:48:32.001445   47471 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0828 17:48:32.001452   47471 command_runner.go:130] > # 	"containers_oom_total",
	I0828 17:48:32.001455   47471 command_runner.go:130] > # 	"containers_oom",
	I0828 17:48:32.001459   47471 command_runner.go:130] > # 	"processes_defunct",
	I0828 17:48:32.001463   47471 command_runner.go:130] > # 	"operations_total",
	I0828 17:48:32.001468   47471 command_runner.go:130] > # 	"operations_latency_seconds",
	I0828 17:48:32.001472   47471 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0828 17:48:32.001478   47471 command_runner.go:130] > # 	"operations_errors_total",
	I0828 17:48:32.001482   47471 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0828 17:48:32.001488   47471 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0828 17:48:32.001493   47471 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0828 17:48:32.001499   47471 command_runner.go:130] > # 	"image_pulls_success_total",
	I0828 17:48:32.001505   47471 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0828 17:48:32.001513   47471 command_runner.go:130] > # 	"containers_oom_count_total",
	I0828 17:48:32.001522   47471 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0828 17:48:32.001528   47471 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0828 17:48:32.001531   47471 command_runner.go:130] > # ]
	I0828 17:48:32.001539   47471 command_runner.go:130] > # The port on which the metrics server will listen.
	I0828 17:48:32.001543   47471 command_runner.go:130] > # metrics_port = 9090
	I0828 17:48:32.001550   47471 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0828 17:48:32.001554   47471 command_runner.go:130] > # metrics_socket = ""
	I0828 17:48:32.001559   47471 command_runner.go:130] > # The certificate for the secure metrics server.
	I0828 17:48:32.001565   47471 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0828 17:48:32.001573   47471 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0828 17:48:32.001582   47471 command_runner.go:130] > # certificate on any modification event.
	I0828 17:48:32.001588   47471 command_runner.go:130] > # metrics_cert = ""
	I0828 17:48:32.001593   47471 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0828 17:48:32.001599   47471 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0828 17:48:32.001603   47471 command_runner.go:130] > # metrics_key = ""
	I0828 17:48:32.001610   47471 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0828 17:48:32.001614   47471 command_runner.go:130] > [crio.tracing]
	I0828 17:48:32.001621   47471 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0828 17:48:32.001624   47471 command_runner.go:130] > # enable_tracing = false
	I0828 17:48:32.001629   47471 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0828 17:48:32.001636   47471 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0828 17:48:32.001642   47471 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0828 17:48:32.001648   47471 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0828 17:48:32.001652   47471 command_runner.go:130] > # CRI-O NRI configuration.
	I0828 17:48:32.001658   47471 command_runner.go:130] > [crio.nri]
	I0828 17:48:32.001662   47471 command_runner.go:130] > # Globally enable or disable NRI.
	I0828 17:48:32.001666   47471 command_runner.go:130] > # enable_nri = false
	I0828 17:48:32.001670   47471 command_runner.go:130] > # NRI socket to listen on.
	I0828 17:48:32.001674   47471 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0828 17:48:32.001678   47471 command_runner.go:130] > # NRI plugin directory to use.
	I0828 17:48:32.001683   47471 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0828 17:48:32.001689   47471 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0828 17:48:32.001694   47471 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0828 17:48:32.001702   47471 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0828 17:48:32.001706   47471 command_runner.go:130] > # nri_disable_connections = false
	I0828 17:48:32.001714   47471 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0828 17:48:32.001725   47471 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0828 17:48:32.001732   47471 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0828 17:48:32.001736   47471 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0828 17:48:32.001742   47471 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0828 17:48:32.001746   47471 command_runner.go:130] > [crio.stats]
	I0828 17:48:32.001753   47471 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0828 17:48:32.001761   47471 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0828 17:48:32.001766   47471 command_runner.go:130] > # stats_collection_period = 0
	I0828 17:48:32.001896   47471 cni.go:84] Creating CNI manager for ""
	I0828 17:48:32.001907   47471 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0828 17:48:32.001915   47471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:48:32.001934   47471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-168922 NodeName:multinode-168922 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:48:32.002061   47471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-168922"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 17:48:32.002142   47471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 17:48:32.011690   47471 command_runner.go:130] > kubeadm
	I0828 17:48:32.011714   47471 command_runner.go:130] > kubectl
	I0828 17:48:32.011720   47471 command_runner.go:130] > kubelet
	I0828 17:48:32.011794   47471 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:48:32.011864   47471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 17:48:32.020720   47471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0828 17:48:32.036520   47471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:48:32.052086   47471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0828 17:48:32.067831   47471 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0828 17:48:32.071989   47471 command_runner.go:130] > 192.168.39.123	control-plane.minikube.internal
	I0828 17:48:32.072060   47471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:48:32.209210   47471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:48:32.223241   47471 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922 for IP: 192.168.39.123
	I0828 17:48:32.223271   47471 certs.go:194] generating shared ca certs ...
	I0828 17:48:32.223293   47471 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:48:32.223490   47471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:48:32.223561   47471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:48:32.223579   47471 certs.go:256] generating profile certs ...
	I0828 17:48:32.223687   47471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/client.key
	I0828 17:48:32.223755   47471 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key.b3d25175
	I0828 17:48:32.223791   47471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key
	I0828 17:48:32.223807   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0828 17:48:32.223821   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0828 17:48:32.223833   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0828 17:48:32.223846   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0828 17:48:32.223860   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0828 17:48:32.223872   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0828 17:48:32.223885   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0828 17:48:32.223896   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0828 17:48:32.223944   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:48:32.223969   47471 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:48:32.223978   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:48:32.224000   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:48:32.224022   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:48:32.224053   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:48:32.224089   47471 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:48:32.224114   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.224127   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem -> /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.224138   47471 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.224713   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:48:32.248038   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:48:32.271219   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:48:32.293508   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:48:32.316064   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 17:48:32.338817   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 17:48:32.362869   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:48:32.386812   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/multinode-168922/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 17:48:32.409780   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:48:32.432046   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:48:32.496245   47471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:48:32.525317   47471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:48:32.549508   47471 ssh_runner.go:195] Run: openssl version
	I0828 17:48:32.555439   47471 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0828 17:48:32.555593   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:48:32.566985   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573703   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573873   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.573935   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:48:32.579344   47471 command_runner.go:130] > b5213941
	I0828 17:48:32.579611   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:48:32.592987   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:48:32.604309   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608808   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608837   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.608884   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:48:32.615725   47471 command_runner.go:130] > 51391683
	I0828 17:48:32.615793   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:48:32.625704   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:48:32.636677   47471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641075   47471 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641113   47471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.641160   47471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:48:32.646670   47471 command_runner.go:130] > 3ec20f2e
	I0828 17:48:32.646741   47471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
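The openssl/ln sequence above follows OpenSSL's subject-hash convention for CApath lookups: "openssl x509 -hash" prints the certificate's subject hash, and the CA is then exposed as <hash>.0 under /etc/ssl/certs so that TLS clients using the system store can find it. A condensed sketch of the same steps (the certificate path is taken from the log; nothing else is assumed):

	# Compute the subject hash and create the <hash>.0 symlink used by CApath lookups.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# Optional sanity check: the self-signed CA should now verify against the system CApath.
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem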
	I0828 17:48:32.656472   47471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:48:32.660856   47471 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:48:32.660881   47471 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0828 17:48:32.660890   47471 command_runner.go:130] > Device: 253,1	Inode: 9432598     Links: 1
	I0828 17:48:32.660900   47471 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0828 17:48:32.660910   47471 command_runner.go:130] > Access: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660917   47471 command_runner.go:130] > Modify: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660924   47471 command_runner.go:130] > Change: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.660932   47471 command_runner.go:130] >  Birth: 2024-08-28 17:41:50.285687991 +0000
	I0828 17:48:32.661003   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 17:48:32.666307   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.666360   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 17:48:32.671887   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.671958   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 17:48:32.677729   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.677873   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 17:48:32.682961   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.683126   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 17:48:32.688268   47471 command_runner.go:130] > Certificate will not expire
	I0828 17:48:32.688355   47471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 17:48:32.693651   47471 command_runner.go:130] > Certificate will not expire
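The "-checkend 86400" calls above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 plus the "Certificate will not expire" message means no renewal is needed before cluster start. The same check can be scripted, for example (the certificate path matches one used above):

	# Exit 0 if the cert is still valid 24h from now, non-zero otherwise.
	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or is already expired)"
	fi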
	I0828 17:48:32.693719   47471 kubeadm.go:392] StartCluster: {Name:multinode-168922 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-168922 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.88 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.13 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:48:32.693865   47471 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:48:32.693931   47471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:48:32.728337   47471 command_runner.go:130] > 1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf
	I0828 17:48:32.728359   47471 command_runner.go:130] > 667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3
	I0828 17:48:32.728365   47471 command_runner.go:130] > 5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9
	I0828 17:48:32.728371   47471 command_runner.go:130] > 9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de
	I0828 17:48:32.728383   47471 command_runner.go:130] > 1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3
	I0828 17:48:32.728397   47471 command_runner.go:130] > c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044
	I0828 17:48:32.728405   47471 command_runner.go:130] > 55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5
	I0828 17:48:32.728418   47471 command_runner.go:130] > 6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb
	I0828 17:48:32.729738   47471 cri.go:89] found id: "1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf"
	I0828 17:48:32.729758   47471 cri.go:89] found id: "667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3"
	I0828 17:48:32.729767   47471 cri.go:89] found id: "5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9"
	I0828 17:48:32.729771   47471 cri.go:89] found id: "9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de"
	I0828 17:48:32.729774   47471 cri.go:89] found id: "1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3"
	I0828 17:48:32.729779   47471 cri.go:89] found id: "c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044"
	I0828 17:48:32.729783   47471 cri.go:89] found id: "55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5"
	I0828 17:48:32.729786   47471 cri.go:89] found id: "6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb"
	I0828 17:48:32.729791   47471 cri.go:89] found id: ""
	I0828 17:48:32.729835   47471 ssh_runner.go:195] Run: sudo runc list -f json
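The container IDs listed above come from the crictl query a few lines earlier; the same filter can be reproduced directly on the node outside of minikube's runner (a sketch; <container-id> is a placeholder for one of the IDs returned):

	# IDs of all kube-system containers, any state (matches the query above).
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Human-readable view, and details for one of the returned IDs.
	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect <container-id>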
	
	
	==> CRI-O <==
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.383261556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867563383235361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5115f470-85c3-4986-8934-ee133ef26584 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.383719345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01829704-b896-40ba-88b5-ce453dc16d57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.383777185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01829704-b896-40ba-88b5-ce453dc16d57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.384181878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01829704-b896-40ba-88b5-ce453dc16d57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.510873712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c4b37f6-1e8c-4bbf-8842-b72fa9a9112a name=/runtime.v1.RuntimeService/Version
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.510951171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c4b37f6-1e8c-4bbf-8842-b72fa9a9112a name=/runtime.v1.RuntimeService/Version
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.512772253Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7d55fc1-114b-446e-9cf3-4ad31b4b20ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.513213646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867563513189495,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7d55fc1-114b-446e-9cf3-4ad31b4b20ad name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.513771571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=34bb0024-d3e8-49f9-a4e3-68631c2dad2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.513828650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=34bb0024-d3e8-49f9-a4e3-68631c2dad2c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 17:52:43 multinode-168922 crio[2742]: time="2024-08-28 17:52:43.514211403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3e67573c6cd5d637fe94a33b97bb5ac21820140e1f0b63b3ed11455b2a19604c,PodSandboxId:5d84015002208f99c39a7c17778579437c68d8522ef240799aa84dbabba44e41,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1724867353702941795,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88,PodSandboxId:578cdc4ec819665c7cf3fa3a71cace3a3d54592c0f046915df67267218819547,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1724867320140817245,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda,PodSandboxId:7f82180a771a310e545e0ebed9cd225280a51384e24481f37d3182961eeb46c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724867320045823411,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c,PodSandboxId:b7f09ea8289c274b91b9ad69d06fa731892e7994b27a23b30e7f330b535dc7e9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724867319995981306,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]
string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:095cbff545696a84485f28a9daa5523789e557801dc99a7d7ee209eca3982e89,PodSandboxId:67b097f1d50aebe345a8c383cc62bed9ca0f97b3d7c5dd9481a20536f55ba96f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867319940827356,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kub
ernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87,PodSandboxId:bb260fa4b64b86edc784df16704b3c93d12ff1e2e7ba1636d9527d4509e328c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724867315164648368,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e,PodSandboxId:319aae90bad564c48eef1253f22a602062115ae410a50d5f991b1ece84db9ad6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724867315118018307,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kube
rnetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31,PodSandboxId:930327a1eae5c6d16d0d42b16a5372f8ff5e6fa5f4e820928c9546cc200d5bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724867315046288713,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4,PodSandboxId:231178480a4f873c1d1f63a93649ef0c685127492961aa20759c3ea6e9cd61e3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724867314916024483,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03efcbef61f9145058aa8094acffb40564ed8dbca79ad3e6505633e939bd7b09,PodSandboxId:8b780384856ae1b3e600b92aac5c41b99c7aa0c769de8301054474a7b3ad232d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1724866993144582563,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-w6glt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f22e31f9-6472-4046-a72c-6966d9134733,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf,PodSandboxId:dc51e5ba8b17b66f40d96dca725d789d2e7f3b53c63004679ea671ccd4528abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724866940418320287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-6r6bx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5f43f2f-2c19-432a-99b8-983ab55c60f0,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:667b50a5c0e5805444b5d2f4172003e94c2411412f4ee19d5990d05ecfe110d3,PodSandboxId:7d12f7c63c7a5739bf7fb6dd706f7888eb89a4a0438f1bf76da92e3fa550cd6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724866940335367579,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 19f5fa32-4836-4094-ad7d-bde5c0fde669,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9,PodSandboxId:ff4f9df0138d7647619037b98ae86a6d697eb6fad6d55440803000848d088216,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1724866928585106349,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-x4zf2,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: 9c3ffd02-c2c7-471b-b3ca-e3b57725d65b,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de,PodSandboxId:cff3c7deaa1f1858ee3a17a5edc84f10acbe148a9914b9c439edd9c01a602457,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724866925145012674,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-476qk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 15ffb98c-113e-4085-8bc3-a0e67a97cdea,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3,PodSandboxId:e95b10cb5007f33008ff0fd6c8201568a1c92ebcb45284026e584fd89b397515,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724866914321523995,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e
8ee75bb0958266387e37bd73805090,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5,PodSandboxId:0145b4a0f58cada615abb0158d807d327f87ce9d0a7067a5bb061908fddb8842,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724866914238078614,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 8074d83089f0665a836945389001ee6d,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044,PodSandboxId:0ce88a035698f0f15f8863fe47e7e1db3047e3bba21b1dc9524e66e4736f4473,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724866914272241252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3acc6a958022ff9fdb1ed85aeb84db6f,},
Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb,PodSandboxId:3aba877e633d2199e50d3ae87f488c1e9ee7297b8d31579292f038e504b76862,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724866914201197437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-168922,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99e9207aaab4ce4bb309c0e52ab05dba,},Annotations:map
[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=34bb0024-d3e8-49f9-a4e3-68631c2dad2c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e67573c6cd5d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   5d84015002208       busybox-7dff88458-w6glt
	0c4a975104154       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   578cdc4ec8196       kindnet-x4zf2
	0dbbd014fafc4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   7f82180a771a3       coredns-6f6b679f8f-6r6bx
	b546b15c13c29       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   b7f09ea8289c2       kube-proxy-476qk
	095cbff545696       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   67b097f1d50ae       storage-provisioner
	20e895bc0bcc5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   bb260fa4b64b8       kube-scheduler-multinode-168922
	ddba4190ea906       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   319aae90bad56       kube-controller-manager-multinode-168922
	2ef2cad6516ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   930327a1eae5c       etcd-multinode-168922
	f12d2f20d9369       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   231178480a4f8       kube-apiserver-multinode-168922
	03efcbef61f91       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   8b780384856ae       busybox-7dff88458-w6glt
	1d5f30bd1d002       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   dc51e5ba8b17b       coredns-6f6b679f8f-6r6bx
	667b50a5c0e58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   7d12f7c63c7a5       storage-provisioner
	5cb61f5b3dfed       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   ff4f9df0138d7       kindnet-x4zf2
	9e3d7d32be036       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   cff3c7deaa1f1       kube-proxy-476qk
	1e68bf808c05d       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   e95b10cb5007f       kube-scheduler-multinode-168922
	c8e59f37886db       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   0ce88a035698f       etcd-multinode-168922
	55546ecd55f3c       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   0145b4a0f58ca       kube-controller-manager-multinode-168922
	6ca1265a851f1       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   3aba877e633d2       kube-apiserver-multinode-168922
	
	
	==> coredns [0dbbd014fafc407cf48be9bd50c7cfba0763c7263fee3707ee927c58bf111dda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45850 - 49312 "HINFO IN 6346308860311285144.4892433205018066843. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011916744s
	
	
	==> coredns [1d5f30bd1d002a730cdd241b84c9414dd9b48efe88343a421eee1fd3deaa2adf] <==
	[INFO] 10.244.0.3:38946 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001763349s
	[INFO] 10.244.0.3:46500 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096885s
	[INFO] 10.244.0.3:56452 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044097s
	[INFO] 10.244.0.3:46737 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001048857s
	[INFO] 10.244.0.3:37993 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119866s
	[INFO] 10.244.0.3:45844 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000041729s
	[INFO] 10.244.0.3:54320 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066208s
	[INFO] 10.244.1.2:48237 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128465s
	[INFO] 10.244.1.2:46842 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077743s
	[INFO] 10.244.1.2:43073 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000188905s
	[INFO] 10.244.1.2:60806 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067102s
	[INFO] 10.244.0.3:42715 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161353s
	[INFO] 10.244.0.3:33886 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007964s
	[INFO] 10.244.0.3:47500 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005551s
	[INFO] 10.244.0.3:54660 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072009s
	[INFO] 10.244.1.2:34764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142289s
	[INFO] 10.244.1.2:49999 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156371s
	[INFO] 10.244.1.2:56180 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000178092s
	[INFO] 10.244.1.2:41725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111127s
	[INFO] 10.244.0.3:33904 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123923s
	[INFO] 10.244.0.3:37785 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000048683s
	[INFO] 10.244.0.3:38463 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000042744s
	[INFO] 10.244.0.3:40587 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000026774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-168922
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-168922
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=multinode-168922
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_42_00_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:41:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-168922
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:52:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:41:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:48:38 +0000   Wed, 28 Aug 2024 17:42:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.123
	  Hostname:    multinode-168922
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 893dedabe52d4c32aacf04c2fe93fe01
	  System UUID:                893dedab-e52d-4c32-aacf-04c2fe93fe01
	  Boot ID:                    c015ee5d-f4eb-4aa8-927b-878dcd67f40e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-w6glt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kube-system                 coredns-6f6b679f8f-6r6bx                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-168922                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-x4zf2                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-168922             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-168922    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-476qk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-168922             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-168922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-168922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-168922 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-168922 event: Registered Node multinode-168922 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-168922 status is now: NodeReady
	  Normal  Starting                 4m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node multinode-168922 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node multinode-168922 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node multinode-168922 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m2s                 node-controller  Node multinode-168922 event: Registered Node multinode-168922 in Controller
	
	
	Name:               multinode-168922-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-168922-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=multinode-168922
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_28T17_49_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:49:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-168922-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:50:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:51:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:51:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:51:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 28 Aug 2024 17:49:49 +0000   Wed, 28 Aug 2024 17:51:01 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.88
	  Hostname:    multinode-168922-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 310348f5dff14d28a2b7e382628a42ce
	  System UUID:                310348f5-dff1-4d28-a2b7-e382628a42ce
	  Boot ID:                    9f706423-b9e2-466e-a8e4-0097c0758b92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-45tvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-h7clw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m56s
	  kube-system                 kube-proxy-z6fk7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m51s                  kube-proxy       
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    9m56s (x2 over 9m56s)  kubelet          Node multinode-168922-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m56s (x2 over 9m56s)  kubelet          Node multinode-168922-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m56s (x2 over 9m56s)  kubelet          Node multinode-168922-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 9m56s                  kubelet          Starting kubelet.
	  Normal  NodeReady                9m36s                  kubelet          Node multinode-168922-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-168922-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-168922-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-168922-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-168922-m02 status is now: NodeReady
	  Normal  NodeNotReady             102s                   node-controller  Node multinode-168922-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060442] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052529] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.186620] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.125003] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.273796] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +3.757692] systemd-fstab-generator[749]: Ignoring "noauto" option for root device
	[  +4.017842] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.058820] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.994664] systemd-fstab-generator[1213]: Ignoring "noauto" option for root device
	[  +0.087752] kauditd_printk_skb: 69 callbacks suppressed
	[Aug28 17:42] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.088666] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.021200] kauditd_printk_skb: 65 callbacks suppressed
	[Aug28 17:43] kauditd_printk_skb: 14 callbacks suppressed
	[Aug28 17:48] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.145114] systemd-fstab-generator[2678]: Ignoring "noauto" option for root device
	[  +0.170124] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[  +0.133913] systemd-fstab-generator[2705]: Ignoring "noauto" option for root device
	[  +0.269141] systemd-fstab-generator[2733]: Ignoring "noauto" option for root device
	[  +0.631840] systemd-fstab-generator[2827]: Ignoring "noauto" option for root device
	[  +2.098237] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +5.665904] kauditd_printk_skb: 184 callbacks suppressed
	[  +7.431810] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.124744] systemd-fstab-generator[3797]: Ignoring "noauto" option for root device
	[Aug28 17:49] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [2ef2cad6516baabe33a0fbd81b95286afe49a3b207930d675cce0454891a6b31] <==
	{"level":"info","ts":"2024-08-28T17:48:35.490281Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b780dcaae8448687","local-member-id":"4c9b6dd9118b591e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:35.490321Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:48:35.492995Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:35.514575Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T17:48:35.514862Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4c9b6dd9118b591e","initial-advertise-peer-urls":["https://192.168.39.123:2380"],"listen-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:48:35.514904Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:48:35.515020Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:48:35.515042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:48:37.231634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgPreVoteResp from 4c9b6dd9118b591e at term 2"}
	{"level":"info","ts":"2024-08-28T17:48:37.231741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e received MsgVoteResp from 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4c9b6dd9118b591e became leader at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.231773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4c9b6dd9118b591e elected leader 4c9b6dd9118b591e at term 3"}
	{"level":"info","ts":"2024-08-28T17:48:37.236861Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:37.237127Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:48:37.236862Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4c9b6dd9118b591e","local-member-attributes":"{Name:multinode-168922 ClientURLs:[https://192.168.39.123:2379]}","request-path":"/0/members/4c9b6dd9118b591e/attributes","cluster-id":"b780dcaae8448687","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:48:37.237548Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:37.237586Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:48:37.238065Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:37.238240Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:48:37.239051Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-08-28T17:48:37.239286Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T17:50:02.093468Z","caller":"traceutil/trace.go:171","msg":"trace[2062693631] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"126.420516ms","start":"2024-08-28T17:50:01.966968Z","end":"2024-08-28T17:50:02.093388Z","steps":["trace[2062693631] 'process raft request'  (duration: 104.017473ms)","trace[2062693631] 'compare'  (duration: 22.043822ms)"],"step_count":2}
	
	
	==> etcd [c8e59f37886db79dd4ba2de6f8772915db5d0079e9b9cea045bc8fbad8fba044] <==
	{"level":"info","ts":"2024-08-28T17:41:55.322013Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:41:55.322660Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T17:41:55.323315Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.123:2379"}
	{"level":"info","ts":"2024-08-28T17:41:55.336479Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:41:55.336561Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:42:47.842684Z","caller":"traceutil/trace.go:171","msg":"trace[1823284932] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"228.824725ms","start":"2024-08-28T17:42:47.613839Z","end":"2024-08-28T17:42:47.842663Z","steps":["trace[1823284932] 'process raft request'  (duration: 215.48225ms)","trace[1823284932] 'compare'  (duration: 13.068972ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:42:47.842668Z","caller":"traceutil/trace.go:171","msg":"trace[1346513069] linearizableReadLoop","detail":"{readStateIndex:456; appliedIndex:455; }","duration":"226.914229ms","start":"2024-08-28T17:42:47.615711Z","end":"2024-08-28T17:42:47.842625Z","steps":["trace[1346513069] 'read index received'  (duration: 213.566092ms)","trace[1346513069] 'applied index is now lower than readState.Index'  (duration: 13.346933ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T17:42:47.842821Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.087554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-168922-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.843039Z","caller":"traceutil/trace.go:171","msg":"trace[788604680] range","detail":"{range_begin:/registry/csinodes/multinode-168922-m02; range_end:; response_count:0; response_revision:439; }","duration":"227.341163ms","start":"2024-08-28T17:42:47.615685Z","end":"2024-08-28T17:42:47.843026Z","steps":["trace[788604680] 'agreement among raft nodes before linearized reading'  (duration: 227.032084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:42:47.843164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.986123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-168922-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.843197Z","caller":"traceutil/trace.go:171","msg":"trace[612802627] range","detail":"{range_begin:/registry/minions/multinode-168922-m02; range_end:; response_count:0; response_revision:439; }","duration":"217.025739ms","start":"2024-08-28T17:42:47.626166Z","end":"2024-08-28T17:42:47.843192Z","steps":["trace[612802627] 'agreement among raft nodes before linearized reading'  (duration: 216.974528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T17:42:47.843347Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.289657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T17:42:47.845070Z","caller":"traceutil/trace.go:171","msg":"trace[858011392] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:439; }","duration":"179.012278ms","start":"2024-08-28T17:42:47.666045Z","end":"2024-08-28T17:42:47.845058Z","steps":["trace[858011392] 'agreement among raft nodes before linearized reading'  (duration: 177.269946ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:43:44.051107Z","caller":"traceutil/trace.go:171","msg":"trace[1894492021] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"135.577124ms","start":"2024-08-28T17:43:43.915492Z","end":"2024-08-28T17:43:44.051069Z","steps":["trace[1894492021] 'process raft request'  (duration: 131.536552ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:44:42.498043Z","caller":"traceutil/trace.go:171","msg":"trace[1597191940] transaction","detail":"{read_only:false; response_revision:714; number_of_response:1; }","duration":"108.333359ms","start":"2024-08-28T17:44:42.389681Z","end":"2024-08-28T17:44:42.498015Z","steps":["trace[1597191940] 'process raft request'  (duration: 107.998423ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T17:46:59.394169Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-28T17:46:59.394325Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-168922","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	{"level":"warn","ts":"2024-08-28T17:46:59.400746Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.400893Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.479790Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-28T17:46:59.479846Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.123:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-28T17:46:59.479914Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4c9b6dd9118b591e","current-leader-member-id":"4c9b6dd9118b591e"}
	{"level":"info","ts":"2024-08-28T17:46:59.482629Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:46:59.482737Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.123:2380"}
	{"level":"info","ts":"2024-08-28T17:46:59.482746Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-168922","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.123:2380"],"advertise-client-urls":["https://192.168.39.123:2379"]}
	
	
	==> kernel <==
	 17:52:43 up 11 min,  0 users,  load average: 0.16, 0.14, 0.09
	Linux multinode-168922 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0c4a9751041545243372fe3c7ebd584d3d631028f8c8860b143eb68ebb8b8c88] <==
	I0828 17:51:41.100178       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:51:51.108493       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:51:51.108629       1 main.go:299] handling current node
	I0828 17:51:51.108714       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:51:51.108740       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:52:01.099856       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:52:01.100127       1 main.go:299] handling current node
	I0828 17:52:01.100165       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:52:01.100183       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:52:11.099174       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:52:11.099290       1 main.go:299] handling current node
	I0828 17:52:11.099326       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:52:11.099333       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:52:21.107651       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:52:21.107784       1 main.go:299] handling current node
	I0828 17:52:21.107827       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:52:21.107846       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:52:31.107534       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:52:31.107580       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:52:31.107714       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:52:31.107735       1 main.go:299] handling current node
	I0828 17:52:41.099526       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:52:41.099570       1 main.go:299] handling current node
	I0828 17:52:41.099587       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:52:41.099593       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [5cb61f5b3dfed66be8c4188670e01132d6f0bcad0c5a66f3349e4dc63a4e9df9] <==
	I0828 17:46:09.509378       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:19.513653       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:19.513842       1 main.go:299] handling current node
	I0828 17:46:19.513899       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:19.513918       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:19.514093       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:19.514117       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:29.510483       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:29.510602       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:29.510818       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:29.510858       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:29.510933       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:29.510952       1 main.go:299] handling current node
	I0828 17:46:39.512671       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:39.512715       1 main.go:299] handling current node
	I0828 17:46:39.512740       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:39.512746       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:39.512893       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:39.512913       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	I0828 17:46:49.517529       1 main.go:295] Handling node with IPs: map[192.168.39.123:{}]
	I0828 17:46:49.517581       1 main.go:299] handling current node
	I0828 17:46:49.517595       1 main.go:295] Handling node with IPs: map[192.168.39.88:{}]
	I0828 17:46:49.517600       1 main.go:322] Node multinode-168922-m02 has CIDR [10.244.1.0/24] 
	I0828 17:46:49.517774       1 main.go:295] Handling node with IPs: map[192.168.39.13:{}]
	I0828 17:46:49.517780       1 main.go:322] Node multinode-168922-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6ca1265a851f1908e722808b6c37038f29cf7647e050178e3032d9e098677ffb] <==
	W0828 17:41:58.342634       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.123]
	I0828 17:41:58.343596       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 17:41:58.352557       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:41:58.703893       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:41:59.330301       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:41:59.345569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0828 17:41:59.358263       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:42:04.057288       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0828 17:42:04.459231       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0828 17:43:14.354376       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36792: use of closed network connection
	E0828 17:43:14.520735       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36816: use of closed network connection
	E0828 17:43:14.685161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36844: use of closed network connection
	E0828 17:43:14.848621       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36860: use of closed network connection
	E0828 17:43:15.006390       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36882: use of closed network connection
	E0828 17:43:15.172086       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36890: use of closed network connection
	E0828 17:43:15.438132       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36926: use of closed network connection
	E0828 17:43:15.597985       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:36930: use of closed network connection
	E0828 17:43:15.762140       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:56170: use of closed network connection
	E0828 17:43:15.932517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.123:8443->192.168.39.1:56180: use of closed network connection
	I0828 17:46:59.395852       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0828 17:46:59.418262       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437573       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437640       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437680       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 17:46:59.437880       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f12d2f20d93694ecb3fd1d1fef2816498d5b36464aee32903123e2862ac647f4] <==
	I0828 17:48:38.519753       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0828 17:48:38.545246       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 17:48:38.545327       1 policy_source.go:224] refreshing policies
	I0828 17:48:38.549509       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 17:48:38.551687       1 shared_informer.go:320] Caches are synced for configmaps
	E0828 17:48:38.552138       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0828 17:48:38.555041       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 17:48:38.556573       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 17:48:38.559603       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 17:48:38.559501       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 17:48:38.559515       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:48:38.565772       1 aggregator.go:171] initial CRD sync complete...
	I0828 17:48:38.565810       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 17:48:38.565817       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 17:48:38.565823       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:48:38.591801       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0828 17:48:38.611534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:48:39.426026       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 17:48:40.727704       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 17:48:40.856629       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 17:48:40.871185       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 17:48:40.939398       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 17:48:40.945529       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 17:48:41.974663       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 17:48:42.225664       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [55546ecd55f3c4b17333be2673515a3d246845ded74f15ec20b02a557ea1d5c5] <==
	I0828 17:44:33.189336       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:34.333786       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:34.333861       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-168922-m03\" does not exist"
	I0828 17:44:34.363363       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	I0828 17:44:34.363533       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	E0828 17:44:34.363773       1 range_allocator.go:410] "Node already has a CIDR allocated. Releasing the new one" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	I0828 17:44:34.363797       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:34.364086       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:34.709495       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:35.045367       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:38.535267       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:44.438401       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:54.057950       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:44:54.058107       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:54.069787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:44:58.515376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:33.531248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:33.531531       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m03"
	I0828 17:45:33.550864       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:33.580689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.654595ms"
	I0828 17:45:33.580942       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.52µs"
	I0828 17:45:38.581620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:38.596910       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:45:38.613211       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:45:48.681029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	
	
	==> kube-controller-manager [ddba4190ea9062bbb5115b6a53799d288d83f71c7eac29e774706cc7b278f35e] <==
	E0828 17:49:57.589531       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-168922-m03" podCIDRs=["10.244.3.0/24"]
	E0828 17:49:57.589625       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-168922-m03"
	E0828 17:49:57.590006       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'multinode-168922-m03': failed to patch node CIDR: Node \"multinode-168922-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0828 17:49:57.590082       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:57.595292       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:49:57.942914       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:02.096365       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:07.703000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:17.383581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:17.384203       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:50:17.397626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:21.926943       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:21.938565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:21.979193       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:22.401617       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m03"
	I0828 17:50:22.401729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-168922-m02"
	I0828 17:51:01.997636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:51:02.020615       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:51:02.024891       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.611221ms"
	I0828 17:51:02.025544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.679µs"
	I0828 17:51:07.108958       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-168922-m02"
	I0828 17:51:41.940554       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mfl7g"
	I0828 17:51:41.964338       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-mfl7g"
	I0828 17:51:41.964375       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5ct7d"
	I0828 17:51:42.007716       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-5ct7d"
	
	
	==> kube-proxy [9e3d7d32be0367363a0f35c0591ded943742c404d34cadf7c164bccb8bf288de] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:42:05.355860       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:42:05.363996       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0828 17:42:05.364191       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:42:05.398626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:42:05.398705       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:42:05.398733       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:42:05.401093       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:42:05.401590       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:42:05.401615       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:42:05.403077       1 config.go:197] "Starting service config controller"
	I0828 17:42:05.403121       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:42:05.403142       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:42:05.403158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:42:05.403681       1 config.go:326] "Starting node config controller"
	I0828 17:42:05.403703       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:42:05.503393       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:42:05.503501       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:42:05.503729       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b546b15c13c29ae1cc0717223026cba99879f710b63603c0b5954dfb352e313c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 17:48:40.402580       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 17:48:40.425334       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.123"]
	E0828 17:48:40.425405       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 17:48:40.539562       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 17:48:40.539621       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 17:48:40.539649       1 server_linux.go:169] "Using iptables Proxier"
	I0828 17:48:40.557848       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 17:48:40.558099       1 server.go:483] "Version info" version="v1.31.0"
	I0828 17:48:40.558124       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:48:40.565628       1 config.go:197] "Starting service config controller"
	I0828 17:48:40.565667       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 17:48:40.565689       1 config.go:104] "Starting endpoint slice config controller"
	I0828 17:48:40.565693       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 17:48:40.566109       1 config.go:326] "Starting node config controller"
	I0828 17:48:40.566135       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 17:48:40.665801       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 17:48:40.665883       1 shared_informer.go:320] Caches are synced for service config
	I0828 17:48:40.666206       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1e68bf808c05d70fb6ffe47cbe318aee5b6b17f01b97162807417dc5b868afb3] <==
	E0828 17:41:56.742108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742147       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:56.742171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 17:41:56.742278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742342       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:41:56.742365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:56.742384       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:56.742409       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0828 17:41:56.743578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 17:41:56.743644       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0828 17:41:56.744473       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.761610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 17:41:57.761666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.772364       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0828 17:41:57.772407       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:57.975304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 17:41:57.975375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:58.007618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 17:41:58.007694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 17:41:58.144941       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 17:41:58.145086       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 17:42:01.328481       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0828 17:46:59.397060       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [20e895bc0bcc591080b27929ee9cd16e8fd286eb028308b99edd01abd5c03e87] <==
	I0828 17:48:35.948806       1 serving.go:386] Generated self-signed cert in-memory
	W0828 17:48:38.515703       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 17:48:38.515800       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 17:48:38.515829       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 17:48:38.515859       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 17:48:38.541974       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 17:48:38.542079       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:48:38.549112       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 17:48:38.549331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:48:38.549726       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 17:48:38.549822       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 17:48:38.650586       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:51:24 multinode-168922 kubelet[2987]: E0828 17:51:24.533655    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867484532567701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:34 multinode-168922 kubelet[2987]: E0828 17:51:34.497336    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:51:34 multinode-168922 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:51:34 multinode-168922 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:51:34 multinode-168922 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:51:34 multinode-168922 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:51:34 multinode-168922 kubelet[2987]: E0828 17:51:34.536154    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867494535805103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:34 multinode-168922 kubelet[2987]: E0828 17:51:34.536179    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867494535805103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:44 multinode-168922 kubelet[2987]: E0828 17:51:44.537872    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867504537383060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:44 multinode-168922 kubelet[2987]: E0828 17:51:44.537959    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867504537383060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:54 multinode-168922 kubelet[2987]: E0828 17:51:54.540736    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867514539712942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:51:54 multinode-168922 kubelet[2987]: E0828 17:51:54.540806    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867514539712942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:04 multinode-168922 kubelet[2987]: E0828 17:52:04.543174    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867524542818608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:04 multinode-168922 kubelet[2987]: E0828 17:52:04.543200    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867524542818608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:14 multinode-168922 kubelet[2987]: E0828 17:52:14.544643    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867534544281700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:14 multinode-168922 kubelet[2987]: E0828 17:52:14.544668    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867534544281700,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:24 multinode-168922 kubelet[2987]: E0828 17:52:24.546674    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867544546236417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:24 multinode-168922 kubelet[2987]: E0828 17:52:24.546706    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867544546236417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:34 multinode-168922 kubelet[2987]: E0828 17:52:34.498171    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 17:52:34 multinode-168922 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 17:52:34 multinode-168922 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 17:52:34 multinode-168922 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 17:52:34 multinode-168922 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 17:52:34 multinode-168922 kubelet[2987]: E0828 17:52:34.548631    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867554548226319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 17:52:34 multinode-168922 kubelet[2987]: E0828 17:52:34.548668    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724867554548226319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0828 17:52:43.147540   49409 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19529-10317/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-168922 -n multinode-168922
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-168922 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.14s)

x
+
TestPreload (222.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-781179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0828 17:58:00.240233   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-781179 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m21.550161929s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-781179 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-781179 image pull gcr.io/k8s-minikube/busybox: (3.336894194s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-781179
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-781179: (6.602900002s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-781179 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0828 17:59:23.525582   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-781179 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.078339672s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-781179 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-28 18:00:04.889737343 +0000 UTC m=+4123.940299324
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-781179 -n test-preload-781179
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-781179 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-781179 logs -n 25: (1.015654448s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922 sudo cat                                       | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt                       | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m02:/home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n                                                                 | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | multinode-168922-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-168922 ssh -n multinode-168922-m02 sudo cat                                   | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | /home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-168922 node stop m03                                                          | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	| node    | multinode-168922 node start                                                             | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC | 28 Aug 24 17:44 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| stop    | -p multinode-168922                                                                     | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:44 UTC |                     |
	| start   | -p multinode-168922                                                                     | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:46 UTC | 28 Aug 24 17:50 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC |                     |
	| node    | multinode-168922 node delete                                                            | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC | 28 Aug 24 17:50 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-168922 stop                                                                   | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:50 UTC |                     |
	| start   | -p multinode-168922                                                                     | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:52 UTC | 28 Aug 24 17:55 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-168922                                                                | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:55 UTC |                     |
	| start   | -p multinode-168922-m02                                                                 | multinode-168922-m02 | jenkins | v1.33.1 | 28 Aug 24 17:55 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-168922-m03                                                                 | multinode-168922-m03 | jenkins | v1.33.1 | 28 Aug 24 17:55 UTC | 28 Aug 24 17:56 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-168922                                                                 | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:56 UTC |                     |
	| delete  | -p multinode-168922-m03                                                                 | multinode-168922-m03 | jenkins | v1.33.1 | 28 Aug 24 17:56 UTC | 28 Aug 24 17:56 UTC |
	| delete  | -p multinode-168922                                                                     | multinode-168922     | jenkins | v1.33.1 | 28 Aug 24 17:56 UTC | 28 Aug 24 17:56 UTC |
	| start   | -p test-preload-781179                                                                  | test-preload-781179  | jenkins | v1.33.1 | 28 Aug 24 17:56 UTC | 28 Aug 24 17:58 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-781179 image pull                                                          | test-preload-781179  | jenkins | v1.33.1 | 28 Aug 24 17:58 UTC | 28 Aug 24 17:58 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-781179                                                                  | test-preload-781179  | jenkins | v1.33.1 | 28 Aug 24 17:58 UTC | 28 Aug 24 17:58 UTC |
	| start   | -p test-preload-781179                                                                  | test-preload-781179  | jenkins | v1.33.1 | 28 Aug 24 17:58 UTC | 28 Aug 24 18:00 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-781179 image list                                                          | test-preload-781179  | jenkins | v1.33.1 | 28 Aug 24 18:00 UTC | 28 Aug 24 18:00 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 17:58:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
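	For readers unfamiliar with the prefix documented above, the following is a small Go sketch that splits one of these lines into its fields. The regexp and field names are illustrative only; they are not the parser minikube or klog itself uses.

	// klogparse.go - illustrative parser for the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" prefix.
	package main

	import (
	    "fmt"
	    "regexp"
	)

	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
	    sample := "I0828 17:58:56.640400   51925 out.go:345] Setting OutFile to fd 1 ..."
	    m := klogLine.FindStringSubmatch(sample)
	    if m == nil {
	        fmt.Println("not a klog-formatted line")
	        return
	    }
	    fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s line=%s msg=%q\n",
	        m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
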
	I0828 17:58:56.640400   51925 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:58:56.640503   51925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:58:56.640512   51925 out.go:358] Setting ErrFile to fd 2...
	I0828 17:58:56.640516   51925 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:58:56.641052   51925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:58:56.641915   51925 out.go:352] Setting JSON to false
	I0828 17:58:56.642860   51925 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6083,"bootTime":1724861854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:58:56.642921   51925 start.go:139] virtualization: kvm guest
	I0828 17:58:56.644483   51925 out.go:177] * [test-preload-781179] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:58:56.645848   51925 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:58:56.645882   51925 notify.go:220] Checking for updates...
	I0828 17:58:56.647970   51925 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:58:56.649225   51925 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:58:56.650403   51925 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:58:56.651415   51925 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:58:56.652430   51925 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:58:56.653905   51925 config.go:182] Loaded profile config "test-preload-781179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0828 17:58:56.654329   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:58:56.654385   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:58:56.669142   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
	I0828 17:58:56.669500   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:58:56.670023   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:58:56.670051   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:58:56.670357   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:58:56.670576   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:58:56.672147   51925 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 17:58:56.673448   51925 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:58:56.673722   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:58:56.673752   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:58:56.687972   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38679
	I0828 17:58:56.688533   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:58:56.689096   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:58:56.689116   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:58:56.689458   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:58:56.689632   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:58:56.724903   51925 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:58:56.726020   51925 start.go:297] selected driver: kvm2
	I0828 17:58:56.726037   51925 start.go:901] validating driver "kvm2" against &{Name:test-preload-781179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-781179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:58:56.726173   51925 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:58:56.726812   51925 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:58:56.726905   51925 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 17:58:56.741453   51925 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 17:58:56.741749   51925 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 17:58:56.741811   51925 cni.go:84] Creating CNI manager for ""
	I0828 17:58:56.741820   51925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 17:58:56.741865   51925 start.go:340] cluster config:
	{Name:test-preload-781179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-781179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:58:56.741950   51925 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 17:58:56.743523   51925 out.go:177] * Starting "test-preload-781179" primary control-plane node in "test-preload-781179" cluster
	I0828 17:58:56.744766   51925 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0828 17:58:56.842855   51925 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0828 17:58:56.842899   51925 cache.go:56] Caching tarball of preloaded images
	I0828 17:58:56.843057   51925 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0828 17:58:56.844895   51925 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0828 17:58:56.846021   51925 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0828 17:58:57.089960   51925 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0828 17:59:08.489926   51925 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0828 17:59:08.490017   51925 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0828 17:59:09.330464   51925 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
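	The download above carries its expected MD5 in the ?checksum= query string, and the preload.go lines then verify the saved tarball against it. As a rough illustration of only that verification step (minikube's preload.go additionally handles caching, locks, and resumption), here is a self-contained Go sketch; the URL and expected sum are copied from the log, and running it will fetch a multi-hundred-MB file.

	// preload_check.go - download the preload tarball and compare its MD5 to the expected value.
	package main

	import (
	    "crypto/md5"
	    "encoding/hex"
	    "fmt"
	    "io"
	    "net/http"
	    "os"
	)

	func main() {
	    url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	    expected := "b2ee0ab83ed99f9e7ff71cb0cf27e8f9" // md5 from the ?checksum= query above

	    resp, err := http.Get(url)
	    if err != nil {
	        panic(err)
	    }
	    defer resp.Body.Close()

	    out, err := os.Create("preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4")
	    if err != nil {
	        panic(err)
	    }
	    defer out.Close()

	    // Hash while writing so the file is only read once.
	    h := md5.New()
	    if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
	        panic(err)
	    }
	    got := hex.EncodeToString(h.Sum(nil))
	    if got != expected {
	        fmt.Printf("checksum mismatch: got %s want %s\n", got, expected)
	        os.Exit(1)
	    }
	    fmt.Println("preload tarball verified:", got)
	}
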
	I0828 17:59:09.330581   51925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/config.json ...
	I0828 17:59:09.330805   51925 start.go:360] acquireMachinesLock for test-preload-781179: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 17:59:09.330862   51925 start.go:364] duration metric: took 37.352µs to acquireMachinesLock for "test-preload-781179"
	I0828 17:59:09.330878   51925 start.go:96] Skipping create...Using existing machine configuration
	I0828 17:59:09.330887   51925 fix.go:54] fixHost starting: 
	I0828 17:59:09.331210   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:09.331245   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:09.345881   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45083
	I0828 17:59:09.346279   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:09.346771   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:09.346793   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:09.347096   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:09.347294   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:09.347466   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetState
	I0828 17:59:09.349294   51925 fix.go:112] recreateIfNeeded on test-preload-781179: state=Stopped err=<nil>
	I0828 17:59:09.349332   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	W0828 17:59:09.349484   51925 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 17:59:09.351701   51925 out.go:177] * Restarting existing kvm2 VM for "test-preload-781179" ...
	I0828 17:59:09.352879   51925 main.go:141] libmachine: (test-preload-781179) Calling .Start
	I0828 17:59:09.353031   51925 main.go:141] libmachine: (test-preload-781179) Ensuring networks are active...
	I0828 17:59:09.353804   51925 main.go:141] libmachine: (test-preload-781179) Ensuring network default is active
	I0828 17:59:09.354123   51925 main.go:141] libmachine: (test-preload-781179) Ensuring network mk-test-preload-781179 is active
	I0828 17:59:09.354511   51925 main.go:141] libmachine: (test-preload-781179) Getting domain xml...
	I0828 17:59:09.355265   51925 main.go:141] libmachine: (test-preload-781179) Creating domain...
	I0828 17:59:10.544127   51925 main.go:141] libmachine: (test-preload-781179) Waiting to get IP...
	I0828 17:59:10.544834   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:10.545196   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:10.545245   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:10.545181   52008 retry.go:31] will retry after 214.40323ms: waiting for machine to come up
	I0828 17:59:10.761593   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:10.761982   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:10.762008   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:10.761941   52008 retry.go:31] will retry after 363.582384ms: waiting for machine to come up
	I0828 17:59:11.127516   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:11.127955   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:11.127981   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:11.127919   52008 retry.go:31] will retry after 307.736077ms: waiting for machine to come up
	I0828 17:59:11.437337   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:11.437733   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:11.437761   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:11.437690   52008 retry.go:31] will retry after 585.542064ms: waiting for machine to come up
	I0828 17:59:12.024276   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:12.024693   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:12.024719   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:12.024636   52008 retry.go:31] will retry after 506.618034ms: waiting for machine to come up
	I0828 17:59:12.533313   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:12.533678   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:12.533710   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:12.533619   52008 retry.go:31] will retry after 610.584322ms: waiting for machine to come up
	I0828 17:59:13.145400   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:13.145818   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:13.145850   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:13.145770   52008 retry.go:31] will retry after 1.167708889s: waiting for machine to come up
	I0828 17:59:14.315064   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:14.315479   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:14.315504   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:14.315434   52008 retry.go:31] will retry after 1.031279282s: waiting for machine to come up
	I0828 17:59:15.348019   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:15.348447   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:15.348484   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:15.348415   52008 retry.go:31] will retry after 1.376241054s: waiting for machine to come up
	I0828 17:59:16.727092   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:16.727476   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:16.727502   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:16.727444   52008 retry.go:31] will retry after 2.296481043s: waiting for machine to come up
	I0828 17:59:19.025356   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:19.025765   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:19.025788   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:19.025727   52008 retry.go:31] will retry after 2.129796127s: waiting for machine to come up
	I0828 17:59:21.157552   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:21.157930   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:21.157951   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:21.157893   52008 retry.go:31] will retry after 2.397261207s: waiting for machine to come up
	I0828 17:59:23.556937   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:23.557256   51925 main.go:141] libmachine: (test-preload-781179) DBG | unable to find current IP address of domain test-preload-781179 in network mk-test-preload-781179
	I0828 17:59:23.557279   51925 main.go:141] libmachine: (test-preload-781179) DBG | I0828 17:59:23.557211   52008 retry.go:31] will retry after 4.448477716s: waiting for machine to come up
	I0828 17:59:28.006797   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.007223   51925 main.go:141] libmachine: (test-preload-781179) Found IP for machine: 192.168.39.175
	I0828 17:59:28.007247   51925 main.go:141] libmachine: (test-preload-781179) Reserving static IP address...
	I0828 17:59:28.007264   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has current primary IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.007641   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "test-preload-781179", mac: "52:54:00:2f:16:fe", ip: "192.168.39.175"} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.007681   51925 main.go:141] libmachine: (test-preload-781179) DBG | skip adding static IP to network mk-test-preload-781179 - found existing host DHCP lease matching {name: "test-preload-781179", mac: "52:54:00:2f:16:fe", ip: "192.168.39.175"}
	I0828 17:59:28.007695   51925 main.go:141] libmachine: (test-preload-781179) Reserved static IP address: 192.168.39.175
	I0828 17:59:28.007716   51925 main.go:141] libmachine: (test-preload-781179) Waiting for SSH to be available...
	I0828 17:59:28.007730   51925 main.go:141] libmachine: (test-preload-781179) DBG | Getting to WaitForSSH function...
	I0828 17:59:28.009433   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.009836   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.009863   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.009992   51925 main.go:141] libmachine: (test-preload-781179) DBG | Using SSH client type: external
	I0828 17:59:28.010019   51925 main.go:141] libmachine: (test-preload-781179) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa (-rw-------)
	I0828 17:59:28.010064   51925 main.go:141] libmachine: (test-preload-781179) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.175 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 17:59:28.010104   51925 main.go:141] libmachine: (test-preload-781179) DBG | About to run SSH command:
	I0828 17:59:28.010125   51925 main.go:141] libmachine: (test-preload-781179) DBG | exit 0
	I0828 17:59:28.134127   51925 main.go:141] libmachine: (test-preload-781179) DBG | SSH cmd err, output: <nil>: 
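	The repeated "will retry after ...: waiting for machine to come up" lines above come from a retry loop that polls the restarted KVM domain for an IP with growing, jittered delays until SSH is reachable. A simplified Go sketch of that pattern follows; lookupIP is a stand-in, not the real libvirt DHCP-lease query, and the backoff constants are illustrative.

	// wait_for_ip.go - poll with growing, jittered backoff until an IP appears or a deadline passes.
	package main

	import (
	    "errors"
	    "fmt"
	    "math/rand"
	    "time"
	)

	func lookupIP() (string, error) {
	    // Stand-in: a real implementation would ask libvirt for the domain's DHCP lease.
	    return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
	    deadline := time.Now().Add(timeout)
	    backoff := 200 * time.Millisecond
	    for time.Now().Before(deadline) {
	        if ip, err := lookupIP(); err == nil {
	            return ip, nil
	        }
	        wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
	        fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
	        time.Sleep(wait)
	        if backoff < 5*time.Second {
	            backoff *= 2
	        }
	    }
	    return "", fmt.Errorf("machine did not report an IP within %v", timeout)
	}

	func main() {
	    if ip, err := waitForIP(3 * time.Second); err != nil {
	        fmt.Println(err)
	    } else {
	        fmt.Println("found IP:", ip)
	    }
	}
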
	I0828 17:59:28.134424   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetConfigRaw
	I0828 17:59:28.134990   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetIP
	I0828 17:59:28.137384   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.137708   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.137737   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.137926   51925 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/config.json ...
	I0828 17:59:28.138130   51925 machine.go:93] provisionDockerMachine start ...
	I0828 17:59:28.138145   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:28.138367   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.140499   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.140766   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.140782   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.140924   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:28.141084   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.141278   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.141393   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:28.141581   51925 main.go:141] libmachine: Using SSH client type: native
	I0828 17:59:28.141818   51925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0828 17:59:28.141835   51925 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 17:59:28.246183   51925 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 17:59:28.246214   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetMachineName
	I0828 17:59:28.246501   51925 buildroot.go:166] provisioning hostname "test-preload-781179"
	I0828 17:59:28.246523   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetMachineName
	I0828 17:59:28.246732   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.249351   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.249728   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.249763   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.249857   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:28.250039   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.250219   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.250390   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:28.250589   51925 main.go:141] libmachine: Using SSH client type: native
	I0828 17:59:28.250772   51925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0828 17:59:28.250784   51925 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-781179 && echo "test-preload-781179" | sudo tee /etc/hostname
	I0828 17:59:28.367355   51925 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-781179
	
	I0828 17:59:28.367391   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.369908   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.370271   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.370301   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.370443   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:28.370631   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.370789   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.370922   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:28.371072   51925 main.go:141] libmachine: Using SSH client type: native
	I0828 17:59:28.371231   51925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0828 17:59:28.371247   51925 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-781179' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-781179/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-781179' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 17:59:28.482053   51925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
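	Hostname provisioning is done by running the shell shown above over SSH: set the hostname, write /etc/hostname, and patch the 127.0.1.1 entry in /etc/hosts. The Go sketch below only assembles and prints those same two commands; the SSH transport that minikube uses is omitted.

	// set_hostname.go - build the provisioning shell for a given hostname (printed, not executed).
	package main

	import "fmt"

	func hostnameCommands(name string) []string {
	    return []string{
	        fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name),
	        fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	  fi
	fi`, name),
	    }
	}

	func main() {
	    for _, cmd := range hostnameCommands("test-preload-781179") {
	        fmt.Println(cmd)
	        fmt.Println("---")
	    }
	}
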
	I0828 17:59:28.482106   51925 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 17:59:28.482127   51925 buildroot.go:174] setting up certificates
	I0828 17:59:28.482136   51925 provision.go:84] configureAuth start
	I0828 17:59:28.482145   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetMachineName
	I0828 17:59:28.482504   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetIP
	I0828 17:59:28.484954   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.485317   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.485350   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.485465   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.487676   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.487999   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.488033   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.488157   51925 provision.go:143] copyHostCerts
	I0828 17:59:28.488217   51925 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 17:59:28.488227   51925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 17:59:28.488289   51925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 17:59:28.488399   51925 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 17:59:28.488410   51925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 17:59:28.488435   51925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 17:59:28.488496   51925 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 17:59:28.488503   51925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 17:59:28.488523   51925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 17:59:28.488582   51925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.test-preload-781179 san=[127.0.0.1 192.168.39.175 localhost minikube test-preload-781179]
	I0828 17:59:28.650238   51925 provision.go:177] copyRemoteCerts
	I0828 17:59:28.650298   51925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 17:59:28.650323   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.652936   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.653306   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.653336   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.653514   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:28.653725   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.653889   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:28.654016   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:28.736037   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 17:59:28.758565   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 17:59:28.780353   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 17:59:28.802012   51925 provision.go:87] duration metric: took 319.866507ms to configureAuth
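	provision.go reports generating a server certificate for the SANs listed above (127.0.0.1, 192.168.39.175, localhost, minikube, test-preload-781179) and then copying it into the guest. A minimal Go sketch of issuing a certificate with that SAN set follows; unlike minikube, which signs with its ca.pem/ca-key.pem, this version self-signs so it stays self-contained.

	// server_cert.go - issue a self-signed server certificate with the SANs from the log.
	package main

	import (
	    "crypto/rand"
	    "crypto/rsa"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)

	func main() {
	    key, err := rsa.GenerateKey(rand.Reader, 2048)
	    if err != nil {
	        panic(err)
	    }
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(1),
	        Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-781179"}},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        DNSNames:     []string{"localhost", "minikube", "test-preload-781179"},
	        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.175")},
	    }
	    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	    if err != nil {
	        panic(err)
	    }
	    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
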
	I0828 17:59:28.802039   51925 buildroot.go:189] setting minikube options for container-runtime
	I0828 17:59:28.802208   51925 config.go:182] Loaded profile config "test-preload-781179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0828 17:59:28.802272   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:28.804993   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.805322   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:28.805348   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:28.805556   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:28.805742   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.805903   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:28.806039   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:28.806200   51925 main.go:141] libmachine: Using SSH client type: native
	I0828 17:59:28.806375   51925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0828 17:59:28.806396   51925 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 17:59:29.025976   51925 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 17:59:29.026002   51925 machine.go:96] duration metric: took 887.86053ms to provisionDockerMachine
	I0828 17:59:29.026019   51925 start.go:293] postStartSetup for "test-preload-781179" (driver="kvm2")
	I0828 17:59:29.026030   51925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 17:59:29.026047   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:29.026388   51925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 17:59:29.026431   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:29.028942   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.029247   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:29.029277   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.029381   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:29.029581   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:29.029781   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:29.029917   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:29.112606   51925 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 17:59:29.116589   51925 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 17:59:29.116614   51925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 17:59:29.116696   51925 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 17:59:29.116809   51925 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 17:59:29.116926   51925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 17:59:29.125738   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:59:29.147633   51925 start.go:296] duration metric: took 121.601081ms for postStartSetup
	I0828 17:59:29.147670   51925 fix.go:56] duration metric: took 19.816783056s for fixHost
	I0828 17:59:29.147691   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:29.150311   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.150639   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:29.150671   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.150816   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:29.151001   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:29.151140   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:29.151252   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:29.151394   51925 main.go:141] libmachine: Using SSH client type: native
	I0828 17:59:29.151551   51925 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.175 22 <nil> <nil>}
	I0828 17:59:29.151560   51925 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 17:59:29.254449   51925 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724867969.232127279
	
	I0828 17:59:29.254489   51925 fix.go:216] guest clock: 1724867969.232127279
	I0828 17:59:29.254497   51925 fix.go:229] Guest: 2024-08-28 17:59:29.232127279 +0000 UTC Remote: 2024-08-28 17:59:29.1476744 +0000 UTC m=+32.539737184 (delta=84.452879ms)
	I0828 17:59:29.254515   51925 fix.go:200] guest clock delta is within tolerance: 84.452879ms
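	The fix.go lines above compare the guest's `date +%s.%N` output with the host clock and accept the 84ms delta as within tolerance. A small Go sketch of that comparison, using the two timestamps from the log; the 2s tolerance is an assumed value for illustration, not necessarily minikube's threshold.

	// clock_delta.go - parse a `date +%s.%N` reading and compare it against the host clock.
	package main

	import (
	    "fmt"
	    "math"
	    "strconv"
	    "strings"
	    "time"
	)

	// parseGuestClock turns "1724867969.232127279" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
	    parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    sec, err := strconv.ParseInt(parts[0], 10, 64)
	    if err != nil {
	        return time.Time{}, err
	    }
	    nsec := int64(0)
	    if len(parts) == 2 {
	        if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	            return time.Time{}, err
	        }
	    }
	    return time.Unix(sec, nsec), nil
	}

	func main() {
	    guest, err := parseGuestClock("1724867969.232127279") // guest value from the log
	    if err != nil {
	        panic(err)
	    }
	    host := time.Unix(1724867969, 147674400) // host-side timestamp from the log
	    delta := guest.Sub(host)
	    const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	    if math.Abs(float64(delta)) <= float64(tolerance) {
	        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	    } else {
	        fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	    }
	}
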
	I0828 17:59:29.254520   51925 start.go:83] releasing machines lock for "test-preload-781179", held for 19.923647109s
	I0828 17:59:29.254535   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:29.254855   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetIP
	I0828 17:59:29.257565   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.257902   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:29.257919   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.258064   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:29.258611   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:29.258827   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:29.258929   51925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 17:59:29.258978   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:29.259024   51925 ssh_runner.go:195] Run: cat /version.json
	I0828 17:59:29.259045   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:29.261699   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.261721   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.262033   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:29.262089   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:29.262110   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.262242   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:29.262339   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:29.262397   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:29.262586   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:29.262601   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:29.262848   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:29.262843   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:29.263039   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:29.263180   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:29.370351   51925 ssh_runner.go:195] Run: systemctl --version
	I0828 17:59:29.376033   51925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 17:59:29.511892   51925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 17:59:29.517626   51925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 17:59:29.517710   51925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 17:59:29.532995   51925 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 17:59:29.533024   51925 start.go:495] detecting cgroup driver to use...
	I0828 17:59:29.533091   51925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 17:59:29.550135   51925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 17:59:29.564059   51925 docker.go:217] disabling cri-docker service (if available) ...
	I0828 17:59:29.564112   51925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 17:59:29.578136   51925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 17:59:29.591338   51925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 17:59:29.698402   51925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 17:59:29.823580   51925 docker.go:233] disabling docker service ...
	I0828 17:59:29.823666   51925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 17:59:29.837443   51925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 17:59:29.849539   51925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 17:59:29.986441   51925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 17:59:30.109524   51925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 17:59:30.122525   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 17:59:30.140202   51925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0828 17:59:30.140265   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.150300   51925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 17:59:30.150374   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.165479   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.175625   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.185101   51925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 17:59:30.195048   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.204926   51925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 17:59:30.220667   51925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
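
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the expected pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl of 0. Assuming a stock CRI-O layout, the drop-in ends up roughly like the fragment below (section names may differ slightly between CRI-O versions):

[crio.image]
pause_image = "registry.k8s.io/pause:3.7"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
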
	I0828 17:59:30.230334   51925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 17:59:30.239230   51925 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 17:59:30.239293   51925 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 17:59:30.251660   51925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
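
The sysctl probe above fails only because br_netfilter is not loaded yet, which is why the fallback is a modprobe followed by enabling IPv4 forwarding. A stand-alone sketch of the same checks, assuming root and the standard /proc paths:

// netfilter_prep.go - a sketch of the netfilter checks above: if the
// bridge-nf-call-iptables sysctl is missing, load br_netfilter, then
// enable IPv4 forwarding. Error handling is simplified for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(bridgeSysctl); err != nil {
		// Equivalent to the failed "sudo sysctl ..." followed by modprobe.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Equivalent to: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward requires root:", err)
	}
}
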
	I0828 17:59:30.260562   51925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:59:30.381097   51925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 17:59:30.465863   51925 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 17:59:30.465937   51925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 17:59:30.470102   51925 start.go:563] Will wait 60s for crictl version
	I0828 17:59:30.470159   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:30.473376   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 17:59:30.509996   51925 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 17:59:30.510062   51925 ssh_runner.go:195] Run: crio --version
	I0828 17:59:30.536609   51925 ssh_runner.go:195] Run: crio --version
	I0828 17:59:30.564197   51925 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0828 17:59:30.565412   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetIP
	I0828 17:59:30.567879   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:30.568217   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:30.568250   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:30.568421   51925 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 17:59:30.572213   51925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
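
The grep-and-rewrite one-liner above keeps /etc/hosts idempotent: any previous host.minikube.internal entry is dropped before the current gateway IP is appended, so repeated restarts never accumulate duplicates. A small sketch of the same idea in plain Go (writing /etc/hosts requires root; the file handling here is illustrative):

// hosts_entry.go - a sketch of the idempotent /etc/hosts update above.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // same effect as the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
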
	I0828 17:59:30.583828   51925 kubeadm.go:883] updating cluster {Name:test-preload-781179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-781179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 17:59:30.583930   51925 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0828 17:59:30.583970   51925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:59:30.619393   51925 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0828 17:59:30.619453   51925 ssh_runner.go:195] Run: which lz4
	I0828 17:59:30.623093   51925 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 17:59:30.626933   51925 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 17:59:30.626961   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0828 17:59:31.963241   51925 crio.go:462] duration metric: took 1.340180391s to copy over tarball
	I0828 17:59:31.963331   51925 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 17:59:34.318179   51925 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.354821731s)
	I0828 17:59:34.318206   51925 crio.go:469] duration metric: took 2.354941105s to extract the tarball
	I0828 17:59:34.318212   51925 ssh_runner.go:146] rm: /preloaded.tar.lz4
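
Because the preload tarball is missing on the node, the ~460 MB archive is copied over SSH and unpacked into /var so CRI-O's image store is populated before kubeadm runs. A sketch of the extraction step with the same tar flags (the on-node path is the one used in the log; the SSH copy itself is omitted):

// preload_extract.go - a sketch of the tarball step above: unpack the
// preloaded image archive into /var, preserving xattrs, then remove it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // copied over SSH in the log above
	// Same flags as the log: preserve xattrs (security.capability) and
	// decompress with lz4 while extracting into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	// The log removes the tarball afterwards to free ~460 MB on the node.
	exec.Command("sudo", "rm", "-f", tarball).Run()
}
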
	I0828 17:59:34.358124   51925 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 17:59:34.397253   51925 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0828 17:59:34.397273   51925 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 17:59:34.397378   51925 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.397410   51925 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0828 17:59:34.397333   51925 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:59:34.397351   51925 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.397442   51925 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.397466   51925 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:34.397477   51925 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:34.397463   51925 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.398768   51925 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.398791   51925 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0828 17:59:34.398791   51925 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.398797   51925 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:34.398776   51925 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:34.398828   51925 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.398854   51925 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.398919   51925 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:59:34.625220   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.660195   51925 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0828 17:59:34.660227   51925 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.660267   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.663766   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.676348   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.696156   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0828 17:59:34.703072   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.715779   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.730659   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:34.730662   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.735407   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:34.774067   51925 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0828 17:59:34.774127   51925 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.774179   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.784847   51925 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0828 17:59:34.784889   51925 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0828 17:59:34.784934   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.812685   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0828 17:59:34.865136   51925 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0828 17:59:34.865179   51925 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.865227   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.875016   51925 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0828 17:59:34.875048   51925 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.875095   51925 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0828 17:59:34.875127   51925 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:34.875104   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.875231   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.875161   51925 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0828 17:59:34.875269   51925 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:34.875276   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0828 17:59:34.875170   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.875306   51925 ssh_runner.go:195] Run: which crictl
	I0828 17:59:34.901477   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0828 17:59:34.901572   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.901590   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.901592   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0828 17:59:34.951975   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:34.952077   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:34.952084   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0828 17:59:34.959353   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:34.986821   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0828 17:59:34.986840   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:34.986842   51925 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0828 17:59:34.986909   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:34.986939   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0828 17:59:35.081302   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0828 17:59:35.089678   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:35.094812   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0828 17:59:35.096277   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:35.147950   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0828 17:59:35.628565   51925 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:59:38.288923   51925 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (3.301963056s)
	I0828 17:59:38.288960   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0828 17:59:38.288970   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (3.302014295s)
	I0828 17:59:38.289029   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (3.207701203s)
	I0828 17:59:38.289039   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0828 17:59:38.289062   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0828 17:59:38.289096   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (3.199389587s)
	I0828 17:59:38.289135   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0828 17:59:38.289153   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (3.194319158s)
	I0828 17:59:38.289158   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0828 17:59:38.289174   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0828 17:59:38.289236   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0828 17:59:38.289244   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (3.192942313s)
	I0828 17:59:38.289287   51925 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (3.141312348s)
	I0828 17:59:38.289317   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0828 17:59:38.289320   51925 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.660727314s)
	I0828 17:59:38.289293   51925 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0828 17:59:38.289393   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0828 17:59:38.359176   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0828 17:59:38.359236   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0828 17:59:38.359247   51925 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0828 17:59:38.359284   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0828 17:59:38.359317   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0828 17:59:38.359345   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0828 17:59:38.359285   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0828 17:59:38.359406   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0828 17:59:38.359418   51925 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0828 17:59:38.359454   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0828 17:59:38.359492   51925 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0828 17:59:38.371032   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0828 17:59:38.371147   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0828 17:59:38.507626   51925 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0828 17:59:38.507676   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0828 17:59:38.507710   51925 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0828 17:59:38.507768   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0828 17:59:38.848265   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0828 17:59:38.848318   51925 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0828 17:59:38.848399   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0828 17:59:40.896214   51925 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.047792539s)
	I0828 17:59:40.896240   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0828 17:59:40.896270   51925 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0828 17:59:40.896311   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0828 17:59:41.640155   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0828 17:59:41.640196   51925 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0828 17:59:41.640253   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0828 17:59:42.083119   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0828 17:59:42.083159   51925 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0828 17:59:42.083211   51925 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0828 17:59:42.723079   51925 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0828 17:59:42.723127   51925 cache_images.go:123] Successfully loaded all cached images
	I0828 17:59:42.723132   51925 cache_images.go:92] duration metric: took 8.325849071s to LoadCachedImages
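
When the extracted preload still does not contain the expected image IDs, each required image is verified with podman image inspect, any stale tag is removed via crictl, and the image is re-loaded from minikube's cached archives under /var/lib/minikube/images. A condensed, sequential sketch of that fallback (the real code runs these checks concurrently and compares exact image hashes):

// load_cached_images.go - a condensed sketch of the image fallback above.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	images := map[string]string{ // image ref -> cached archive name
		"registry.k8s.io/kube-proxy:v1.24.4":              "kube-proxy_v1.24.4",
		"registry.k8s.io/pause:3.7":                       "pause_3.7",
		"registry.k8s.io/coredns/coredns:v1.8.6":          "coredns_v1.8.6",
		"registry.k8s.io/etcd:3.5.3-0":                    "etcd_3.5.3-0",
		"registry.k8s.io/kube-apiserver:v1.24.4":          "kube-apiserver_v1.24.4",
		"registry.k8s.io/kube-scheduler:v1.24.4":          "kube-scheduler_v1.24.4",
		"registry.k8s.io/kube-controller-manager:v1.24.4": "kube-controller-manager_v1.24.4",
	}
	for ref, archive := range images {
		out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			continue // already present; the real code also compares the expected hash
		}
		exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run() // drop any stale tag
		path := filepath.Join("/var/lib/minikube/images", archive)
		if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
			fmt.Printf("load %s failed: %v\n%s\n", ref, err, out)
		}
	}
}
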
	I0828 17:59:42.723144   51925 kubeadm.go:934] updating node { 192.168.39.175 8443 v1.24.4 crio true true} ...
	I0828 17:59:42.723246   51925 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-781179 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-781179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 17:59:42.723351   51925 ssh_runner.go:195] Run: crio config
	I0828 17:59:42.775709   51925 cni.go:84] Creating CNI manager for ""
	I0828 17:59:42.775735   51925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 17:59:42.775749   51925 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 17:59:42.775767   51925 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.175 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-781179 NodeName:test-preload-781179 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 17:59:42.775892   51925 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-781179"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 17:59:42.775949   51925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0828 17:59:42.785683   51925 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 17:59:42.785764   51925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 17:59:42.794977   51925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0828 17:59:42.810497   51925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 17:59:42.825876   51925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
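
The kubeadm.yaml.new written above is a multi-document YAML file: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration, matching the dump earlier in this log. A stdlib-only sketch that splits the file and reports each document's kind (path as used in the log):

// kubeadm_yaml_kinds.go - list the kind of each document in the generated
// kubeadm config, without pulling in a YAML library.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
}
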
	I0828 17:59:42.841635   51925 ssh_runner.go:195] Run: grep 192.168.39.175	control-plane.minikube.internal$ /etc/hosts
	I0828 17:59:42.845223   51925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.175	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 17:59:42.857375   51925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:59:42.971081   51925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:59:42.997595   51925 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179 for IP: 192.168.39.175
	I0828 17:59:42.997615   51925 certs.go:194] generating shared ca certs ...
	I0828 17:59:42.997632   51925 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:59:42.997787   51925 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 17:59:42.997858   51925 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 17:59:42.997875   51925 certs.go:256] generating profile certs ...
	I0828 17:59:42.997975   51925 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/client.key
	I0828 17:59:42.998103   51925 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/apiserver.key.a21b5cb2
	I0828 17:59:42.998187   51925 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/proxy-client.key
	I0828 17:59:42.998317   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 17:59:42.998351   51925 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 17:59:42.998359   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 17:59:42.998391   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 17:59:42.998434   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 17:59:42.998462   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 17:59:42.998501   51925 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 17:59:42.999395   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 17:59:43.042550   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 17:59:43.078397   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 17:59:43.101179   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 17:59:43.126294   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 17:59:43.170065   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 17:59:43.203308   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 17:59:43.229040   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 17:59:43.251512   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 17:59:43.272925   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 17:59:43.294374   51925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 17:59:43.315815   51925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 17:59:43.331370   51925 ssh_runner.go:195] Run: openssl version
	I0828 17:59:43.336708   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 17:59:43.347021   51925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:59:43.350961   51925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:59:43.351007   51925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 17:59:43.356509   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 17:59:43.366536   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 17:59:43.376674   51925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 17:59:43.380752   51925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 17:59:43.380811   51925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 17:59:43.386256   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 17:59:43.396853   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 17:59:43.407836   51925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 17:59:43.412174   51925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 17:59:43.412234   51925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 17:59:43.417533   51925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
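
The openssl/ln sequence above follows OpenSSL's CA lookup convention: each certificate copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix (b5213941, 51391683, 3ec20f2e in this run). A sketch of creating one such link, assuming the openssl CLI is available and the input path from the log:

// cert_hash_link.go - compute the OpenSSL subject hash of a CA certificate
// and link /etc/ssl/certs/<hash>.0 to it, as the log does with ln -fs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // example from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace any stale link, like the "ln -fs" in the log
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink (needs root):", err)
	}
}
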
	I0828 17:59:43.427619   51925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 17:59:43.431698   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 17:59:43.437268   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 17:59:43.442668   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 17:59:43.448035   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 17:59:43.453300   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 17:59:43.458446   51925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
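
Each control-plane certificate is then probed with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; a failure here would trigger regeneration rather than reuse. A sketch over the same certificate paths:

// cert_expiry_check.go - replay the "-checkend 86400" probes from the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
		if err != nil {
			fmt.Printf("%s expires within 24h (or is unreadable): %v\n", c, err)
		} else {
			fmt.Printf("%s valid for at least 24h\n", c)
		}
	}
}
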
	I0828 17:59:43.463719   51925 kubeadm.go:392] StartCluster: {Name:test-preload-781179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-781179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:59:43.463808   51925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 17:59:43.463843   51925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:59:43.499407   51925 cri.go:89] found id: ""
	I0828 17:59:43.499478   51925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 17:59:43.508901   51925 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 17:59:43.508919   51925 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 17:59:43.508972   51925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 17:59:43.517816   51925 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:59:43.518348   51925 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-781179" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:59:43.518501   51925 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-781179" cluster setting kubeconfig missing "test-preload-781179" context setting]
	I0828 17:59:43.518878   51925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:59:43.519682   51925 kapi.go:59] client config for test-preload-781179: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
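
The client config dumped above corresponds to a client-go rest.Config using mutual TLS: the profile's client certificate and key plus the cluster CA. A minimal sketch with the standard client-go packages (paths taken from the log; this is not minikube's kapi wrapper):

// kapi_client.go - build a clientset from the cert paths in the dump above
// and list nodes as a connectivity check.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/19529-10317/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.39.175:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/test-preload-781179/client.crt",
			KeyFile:  profile + "/profiles/test-preload-781179/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
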
	I0828 17:59:43.520430   51925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 17:59:43.529308   51925 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.175
	I0828 17:59:43.529339   51925 kubeadm.go:1160] stopping kube-system containers ...
	I0828 17:59:43.529349   51925 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 17:59:43.529413   51925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 17:59:43.563560   51925 cri.go:89] found id: ""
	I0828 17:59:43.563634   51925 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 17:59:43.579538   51925 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 17:59:43.588783   51925 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 17:59:43.588808   51925 kubeadm.go:157] found existing configuration files:
	
	I0828 17:59:43.588856   51925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 17:59:43.597662   51925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 17:59:43.597731   51925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 17:59:43.607081   51925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 17:59:43.615722   51925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 17:59:43.615786   51925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 17:59:43.624977   51925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 17:59:43.633748   51925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 17:59:43.633809   51925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 17:59:43.642777   51925 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 17:59:43.651587   51925 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 17:59:43.651646   51925 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
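
The loop above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes anything that does not match, so the subsequent kubeadm phases regenerate them; in this run the files were simply absent. A sketch of the same stale-config cleanup:

// stale_kubeconfig_cleanup.go - remove kubeconfigs that do not reference
// the expected control-plane endpoint (or are missing/unreadable).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("removing stale or missing", path)
			os.Remove(path) // needs root on a real node
		}
	}
}
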
	I0828 17:59:43.660617   51925 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 17:59:43.669410   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 17:59:43.753694   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 17:59:44.329853   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 17:59:44.589170   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 17:59:44.648179   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
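
Rather than a full kubeadm init, the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing kubeadm.yaml, with PATH pointing at the pinned v1.24.4 binaries. A sketch of that sequence (sudo and the SSH transport omitted for brevity):

// kubeadm_phases.go - replay the kubeadm init phases shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	env := append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.4:"+os.Getenv("PATH"))
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Env = env
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}
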
	I0828 17:59:44.719206   51925 api_server.go:52] waiting for apiserver process to appear ...
	I0828 17:59:44.719284   51925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:59:45.220401   51925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:59:45.719459   51925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:59:45.736082   51925 api_server.go:72] duration metric: took 1.016881839s to wait for apiserver process to appear ...
	I0828 17:59:45.736111   51925 api_server.go:88] waiting for apiserver healthz status ...
	I0828 17:59:45.736133   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:45.736630   51925 api_server.go:269] stopped: https://192.168.39.175:8443/healthz: Get "https://192.168.39.175:8443/healthz": dial tcp 192.168.39.175:8443: connect: connection refused
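
From here the log polls the apiserver's /healthz endpoint: connection refused while the static pod is starting, then 403 for the anonymous probe, then 500 while poststarthooks finish, and only HTTP 200 counts as healthy. A sketch of such a poll loop (TLS verification is skipped purely to keep the example short; the real client trusts the cluster CA instead):

// healthz_poll.go - poll /healthz until the apiserver reports healthy or a
// deadline passes, treating any non-200 response or dial error as "not ready".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.175:8443/healthz")
		if err == nil {
			if resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready, status", resp.StatusCode)
			resp.Body.Close()
		} else {
			fmt.Println("not ready:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
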
	I0828 17:59:46.236195   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:49.657844   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 17:59:49.657877   51925 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 17:59:49.657894   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:49.748421   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 17:59:49.748453   51925 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 17:59:49.748472   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:49.777111   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 17:59:49.777152   51925 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 17:59:50.236670   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:50.242071   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 17:59:50.242118   51925 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 17:59:50.736619   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:50.742416   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 17:59:50.742442   51925 api_server.go:103] status: https://192.168.39.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 17:59:51.237105   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 17:59:51.243389   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I0828 17:59:51.250784   51925 api_server.go:141] control plane version: v1.24.4
	I0828 17:59:51.250808   51925 api_server.go:131] duration metric: took 5.514690444s to wait for apiserver health ...
	I0828 17:59:51.250816   51925 cni.go:84] Creating CNI manager for ""
	I0828 17:59:51.250822   51925 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 17:59:51.252539   51925 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 17:59:51.253616   51925 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 17:59:51.264240   51925 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 17:59:51.289083   51925 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 17:59:51.289195   51925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 17:59:51.289219   51925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 17:59:51.306327   51925 system_pods.go:59] 7 kube-system pods found
	I0828 17:59:51.306358   51925 system_pods.go:61] "coredns-6d4b75cb6d-tqdcl" [a003f5ee-20b1-449e-ae65-9f9e42b49168] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 17:59:51.306364   51925 system_pods.go:61] "etcd-test-preload-781179" [36ee0430-c42a-4480-b1d8-f3d9b7a9b8ff] Running
	I0828 17:59:51.306369   51925 system_pods.go:61] "kube-apiserver-test-preload-781179" [f20b5a69-57af-41d0-bd3c-2fbb9f33ee57] Running
	I0828 17:59:51.306382   51925 system_pods.go:61] "kube-controller-manager-test-preload-781179" [f767c8b8-7796-46b1-9aeb-8a53b7e253b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 17:59:51.306387   51925 system_pods.go:61] "kube-proxy-sc8vl" [b58feff6-0c7d-4087-842c-353320bb2fe3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 17:59:51.306392   51925 system_pods.go:61] "kube-scheduler-test-preload-781179" [effb56b8-b673-446e-a771-ce69f51f0496] Running
	I0828 17:59:51.306400   51925 system_pods.go:61] "storage-provisioner" [02ec3bc1-bf5e-4ec3-a87d-215150f9cea1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 17:59:51.306406   51925 system_pods.go:74] duration metric: took 17.300454ms to wait for pod list to return data ...
	I0828 17:59:51.306417   51925 node_conditions.go:102] verifying NodePressure condition ...
	I0828 17:59:51.311077   51925 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 17:59:51.311102   51925 node_conditions.go:123] node cpu capacity is 2
	I0828 17:59:51.311111   51925 node_conditions.go:105] duration metric: took 4.689731ms to run NodePressure ...
	I0828 17:59:51.311127   51925 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 17:59:51.501708   51925 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 17:59:51.507335   51925 kubeadm.go:739] kubelet initialised
	I0828 17:59:51.507353   51925 kubeadm.go:740] duration metric: took 5.622698ms waiting for restarted kubelet to initialise ...
	I0828 17:59:51.507361   51925 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:59:51.512046   51925 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:51.516769   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.516791   51925 pod_ready.go:82] duration metric: took 4.725461ms for pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:51.516800   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.516807   51925 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:51.521548   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "etcd-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.521574   51925 pod_ready.go:82] duration metric: took 4.760499ms for pod "etcd-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:51.521584   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "etcd-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.521589   51925 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:51.525950   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "kube-apiserver-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.525968   51925 pod_ready.go:82] duration metric: took 4.369997ms for pod "kube-apiserver-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:51.525976   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "kube-apiserver-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.525985   51925 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:51.692928   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.692956   51925 pod_ready.go:82] duration metric: took 166.962474ms for pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:51.692966   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:51.692972   51925 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sc8vl" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:52.093129   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "kube-proxy-sc8vl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:52.093154   51925 pod_ready.go:82] duration metric: took 400.172039ms for pod "kube-proxy-sc8vl" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:52.093166   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "kube-proxy-sc8vl" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:52.093173   51925 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 17:59:52.493436   51925 pod_ready.go:98] node "test-preload-781179" hosting pod "kube-scheduler-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:52.493464   51925 pod_ready.go:82] duration metric: took 400.282124ms for pod "kube-scheduler-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	E0828 17:59:52.493477   51925 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-781179" hosting pod "kube-scheduler-test-preload-781179" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:52.493490   51925 pod_ready.go:39] duration metric: took 986.12164ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 17:59:52.493518   51925 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 17:59:52.505165   51925 ops.go:34] apiserver oom_adj: -16
	I0828 17:59:52.505185   51925 kubeadm.go:597] duration metric: took 8.99626065s to restartPrimaryControlPlane
	I0828 17:59:52.505194   51925 kubeadm.go:394] duration metric: took 9.041482706s to StartCluster
	I0828 17:59:52.505209   51925 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:59:52.505284   51925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:59:52.505882   51925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 17:59:52.506162   51925 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.175 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 17:59:52.506278   51925 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 17:59:52.506331   51925 addons.go:69] Setting storage-provisioner=true in profile "test-preload-781179"
	I0828 17:59:52.506350   51925 addons.go:69] Setting default-storageclass=true in profile "test-preload-781179"
	I0828 17:59:52.506399   51925 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-781179"
	I0828 17:59:52.506433   51925 config.go:182] Loaded profile config "test-preload-781179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0828 17:59:52.506357   51925 addons.go:234] Setting addon storage-provisioner=true in "test-preload-781179"
	W0828 17:59:52.506471   51925 addons.go:243] addon storage-provisioner should already be in state true
	I0828 17:59:52.506491   51925 host.go:66] Checking if "test-preload-781179" exists ...
	I0828 17:59:52.506744   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:52.506786   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:52.506812   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:52.506843   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:52.507806   51925 out.go:177] * Verifying Kubernetes components...
	I0828 17:59:52.509000   51925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 17:59:52.522117   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
	I0828 17:59:52.522141   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0828 17:59:52.522598   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:52.522650   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:52.523082   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:52.523098   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:52.523196   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:52.523219   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:52.523426   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:52.523552   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:52.523594   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetState
	I0828 17:59:52.524016   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:52.524055   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:52.525974   51925 kapi.go:59] client config for test-preload-781179: &rest.Config{Host:"https://192.168.39.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/test-preload-781179/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 17:59:52.526333   51925 addons.go:234] Setting addon default-storageclass=true in "test-preload-781179"
	W0828 17:59:52.526355   51925 addons.go:243] addon default-storageclass should already be in state true
	I0828 17:59:52.526380   51925 host.go:66] Checking if "test-preload-781179" exists ...
	I0828 17:59:52.526773   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:52.526829   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:52.538901   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45181
	I0828 17:59:52.539343   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:52.539876   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:52.539896   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:52.540195   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:52.540381   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetState
	I0828 17:59:52.541111   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39611
	I0828 17:59:52.541525   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:52.542038   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:52.542061   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:52.542165   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:52.542383   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:52.542935   51925 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:59:52.542982   51925 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:59:52.543979   51925 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 17:59:52.545427   51925 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:59:52.545440   51925 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 17:59:52.545454   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:52.548508   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:52.548917   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:52.548950   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:52.549086   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:52.549283   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:52.549463   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:52.549646   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:52.558346   51925 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I0828 17:59:52.558760   51925 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:59:52.559181   51925 main.go:141] libmachine: Using API Version  1
	I0828 17:59:52.559205   51925 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:59:52.559509   51925 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:59:52.559682   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetState
	I0828 17:59:52.561181   51925 main.go:141] libmachine: (test-preload-781179) Calling .DriverName
	I0828 17:59:52.561408   51925 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 17:59:52.561424   51925 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 17:59:52.561444   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHHostname
	I0828 17:59:52.564301   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:52.564780   51925 main.go:141] libmachine: (test-preload-781179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:16:fe", ip: ""} in network mk-test-preload-781179: {Iface:virbr1 ExpiryTime:2024-08-28 18:59:19 +0000 UTC Type:0 Mac:52:54:00:2f:16:fe Iaid: IPaddr:192.168.39.175 Prefix:24 Hostname:test-preload-781179 Clientid:01:52:54:00:2f:16:fe}
	I0828 17:59:52.564806   51925 main.go:141] libmachine: (test-preload-781179) DBG | domain test-preload-781179 has defined IP address 192.168.39.175 and MAC address 52:54:00:2f:16:fe in network mk-test-preload-781179
	I0828 17:59:52.564978   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHPort
	I0828 17:59:52.565164   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHKeyPath
	I0828 17:59:52.565292   51925 main.go:141] libmachine: (test-preload-781179) Calling .GetSSHUsername
	I0828 17:59:52.565407   51925 sshutil.go:53] new ssh client: &{IP:192.168.39.175 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/test-preload-781179/id_rsa Username:docker}
	I0828 17:59:52.683964   51925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 17:59:52.699701   51925 node_ready.go:35] waiting up to 6m0s for node "test-preload-781179" to be "Ready" ...
	I0828 17:59:52.775970   51925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 17:59:52.793028   51925 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 17:59:53.734495   51925 main.go:141] libmachine: Making call to close driver server
	I0828 17:59:53.734519   51925 main.go:141] libmachine: (test-preload-781179) Calling .Close
	I0828 17:59:53.734500   51925 main.go:141] libmachine: Making call to close driver server
	I0828 17:59:53.734594   51925 main.go:141] libmachine: (test-preload-781179) Calling .Close
	I0828 17:59:53.734761   51925 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:59:53.734776   51925 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:59:53.734785   51925 main.go:141] libmachine: Making call to close driver server
	I0828 17:59:53.734794   51925 main.go:141] libmachine: (test-preload-781179) Calling .Close
	I0828 17:59:53.734830   51925 main.go:141] libmachine: (test-preload-781179) DBG | Closing plugin on server side
	I0828 17:59:53.734852   51925 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:59:53.734862   51925 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:59:53.734877   51925 main.go:141] libmachine: Making call to close driver server
	I0828 17:59:53.734889   51925 main.go:141] libmachine: (test-preload-781179) Calling .Close
	I0828 17:59:53.734991   51925 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:59:53.735006   51925 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:59:53.735114   51925 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:59:53.735125   51925 main.go:141] libmachine: (test-preload-781179) DBG | Closing plugin on server side
	I0828 17:59:53.735128   51925 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:59:53.740715   51925 main.go:141] libmachine: Making call to close driver server
	I0828 17:59:53.740731   51925 main.go:141] libmachine: (test-preload-781179) Calling .Close
	I0828 17:59:53.740932   51925 main.go:141] libmachine: Successfully made call to close driver server
	I0828 17:59:53.740948   51925 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 17:59:53.740990   51925 main.go:141] libmachine: (test-preload-781179) DBG | Closing plugin on server side
	I0828 17:59:53.742661   51925 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0828 17:59:53.743742   51925 addons.go:510] duration metric: took 1.237475424s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0828 17:59:54.703582   51925 node_ready.go:53] node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:57.203705   51925 node_ready.go:53] node "test-preload-781179" has status "Ready":"False"
	I0828 17:59:59.204283   51925 node_ready.go:53] node "test-preload-781179" has status "Ready":"False"
	I0828 18:00:00.203282   51925 node_ready.go:49] node "test-preload-781179" has status "Ready":"True"
	I0828 18:00:00.203307   51925 node_ready.go:38] duration metric: took 7.503570204s for node "test-preload-781179" to be "Ready" ...
	I0828 18:00:00.203316   51925 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:00:00.208774   51925 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:00.213939   51925 pod_ready.go:93] pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:00.213962   51925 pod_ready.go:82] duration metric: took 5.162795ms for pod "coredns-6d4b75cb6d-tqdcl" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:00.213970   51925 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:02.220306   51925 pod_ready.go:103] pod "etcd-test-preload-781179" in "kube-system" namespace has status "Ready":"False"
	I0828 18:00:03.219818   51925 pod_ready.go:93] pod "etcd-test-preload-781179" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:03.219842   51925 pod_ready.go:82] duration metric: took 3.005864112s for pod "etcd-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.219853   51925 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.224670   51925 pod_ready.go:93] pod "kube-apiserver-test-preload-781179" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:03.224690   51925 pod_ready.go:82] duration metric: took 4.828833ms for pod "kube-apiserver-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.224701   51925 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.228330   51925 pod_ready.go:93] pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:03.228350   51925 pod_ready.go:82] duration metric: took 3.642904ms for pod "kube-controller-manager-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.228358   51925 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sc8vl" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.232012   51925 pod_ready.go:93] pod "kube-proxy-sc8vl" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:03.232027   51925 pod_ready.go:82] duration metric: took 3.663036ms for pod "kube-proxy-sc8vl" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.232034   51925 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.803782   51925 pod_ready.go:93] pod "kube-scheduler-test-preload-781179" in "kube-system" namespace has status "Ready":"True"
	I0828 18:00:03.803804   51925 pod_ready.go:82] duration metric: took 571.763221ms for pod "kube-scheduler-test-preload-781179" in "kube-system" namespace to be "Ready" ...
	I0828 18:00:03.803814   51925 pod_ready.go:39] duration metric: took 3.600489426s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:00:03.803826   51925 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:00:03.803875   51925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:00:03.818893   51925 api_server.go:72] duration metric: took 11.312696849s to wait for apiserver process to appear ...
	I0828 18:00:03.818920   51925 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:00:03.818946   51925 api_server.go:253] Checking apiserver healthz at https://192.168.39.175:8443/healthz ...
	I0828 18:00:03.824010   51925 api_server.go:279] https://192.168.39.175:8443/healthz returned 200:
	ok
	I0828 18:00:03.825011   51925 api_server.go:141] control plane version: v1.24.4
	I0828 18:00:03.825030   51925 api_server.go:131] duration metric: took 6.104811ms to wait for apiserver health ...
	I0828 18:00:03.825037   51925 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:00:04.006144   51925 system_pods.go:59] 7 kube-system pods found
	I0828 18:00:04.006171   51925 system_pods.go:61] "coredns-6d4b75cb6d-tqdcl" [a003f5ee-20b1-449e-ae65-9f9e42b49168] Running
	I0828 18:00:04.006179   51925 system_pods.go:61] "etcd-test-preload-781179" [36ee0430-c42a-4480-b1d8-f3d9b7a9b8ff] Running
	I0828 18:00:04.006182   51925 system_pods.go:61] "kube-apiserver-test-preload-781179" [f20b5a69-57af-41d0-bd3c-2fbb9f33ee57] Running
	I0828 18:00:04.006186   51925 system_pods.go:61] "kube-controller-manager-test-preload-781179" [f767c8b8-7796-46b1-9aeb-8a53b7e253b8] Running
	I0828 18:00:04.006189   51925 system_pods.go:61] "kube-proxy-sc8vl" [b58feff6-0c7d-4087-842c-353320bb2fe3] Running
	I0828 18:00:04.006192   51925 system_pods.go:61] "kube-scheduler-test-preload-781179" [effb56b8-b673-446e-a771-ce69f51f0496] Running
	I0828 18:00:04.006195   51925 system_pods.go:61] "storage-provisioner" [02ec3bc1-bf5e-4ec3-a87d-215150f9cea1] Running
	I0828 18:00:04.006201   51925 system_pods.go:74] duration metric: took 181.159041ms to wait for pod list to return data ...
	I0828 18:00:04.006212   51925 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:00:04.203422   51925 default_sa.go:45] found service account: "default"
	I0828 18:00:04.203446   51925 default_sa.go:55] duration metric: took 197.227861ms for default service account to be created ...
	I0828 18:00:04.203453   51925 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:00:04.406412   51925 system_pods.go:86] 7 kube-system pods found
	I0828 18:00:04.406438   51925 system_pods.go:89] "coredns-6d4b75cb6d-tqdcl" [a003f5ee-20b1-449e-ae65-9f9e42b49168] Running
	I0828 18:00:04.406443   51925 system_pods.go:89] "etcd-test-preload-781179" [36ee0430-c42a-4480-b1d8-f3d9b7a9b8ff] Running
	I0828 18:00:04.406447   51925 system_pods.go:89] "kube-apiserver-test-preload-781179" [f20b5a69-57af-41d0-bd3c-2fbb9f33ee57] Running
	I0828 18:00:04.406452   51925 system_pods.go:89] "kube-controller-manager-test-preload-781179" [f767c8b8-7796-46b1-9aeb-8a53b7e253b8] Running
	I0828 18:00:04.406459   51925 system_pods.go:89] "kube-proxy-sc8vl" [b58feff6-0c7d-4087-842c-353320bb2fe3] Running
	I0828 18:00:04.406462   51925 system_pods.go:89] "kube-scheduler-test-preload-781179" [effb56b8-b673-446e-a771-ce69f51f0496] Running
	I0828 18:00:04.406465   51925 system_pods.go:89] "storage-provisioner" [02ec3bc1-bf5e-4ec3-a87d-215150f9cea1] Running
	I0828 18:00:04.406472   51925 system_pods.go:126] duration metric: took 203.012957ms to wait for k8s-apps to be running ...
	I0828 18:00:04.406480   51925 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:00:04.406537   51925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:00:04.420716   51925 system_svc.go:56] duration metric: took 14.227327ms WaitForService to wait for kubelet
	I0828 18:00:04.420749   51925 kubeadm.go:582] duration metric: took 11.91455619s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:00:04.420773   51925 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:00:04.603167   51925 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:00:04.603196   51925 node_conditions.go:123] node cpu capacity is 2
	I0828 18:00:04.603208   51925 node_conditions.go:105] duration metric: took 182.429771ms to run NodePressure ...
	I0828 18:00:04.603219   51925 start.go:241] waiting for startup goroutines ...
	I0828 18:00:04.603226   51925 start.go:246] waiting for cluster config update ...
	I0828 18:00:04.603235   51925 start.go:255] writing updated cluster config ...
	I0828 18:00:04.603493   51925 ssh_runner.go:195] Run: rm -f paused
	I0828 18:00:04.647767   51925 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0828 18:00:04.649724   51925 out.go:201] 
	W0828 18:00:04.651146   51925 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0828 18:00:04.652308   51925 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0828 18:00:04.653559   51925 out.go:177] * Done! kubectl is now configured to use "test-preload-781179" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.505051555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868005505026252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9166cebb-3b1c-491c-84fc-e5eafc2ccca6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.505538292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e58891b3-6332-4b55-ad5d-71da8748cf44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.505617642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e58891b3-6332-4b55-ad5d-71da8748cf44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.505841058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e86b4a92315ade2a5ceb3d3b40d337db4eefef18023998600c920ea2b98886a,PodSandboxId:3c8f3db6f138caad07c940074a35fd197cd917ac29b1ad319f731fc19c5e5519,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724867997713966632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tqdcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a003f5ee-20b1-449e-ae65-9f9e42b49168,},Annotations:map[string]string{io.kubernetes.container.hash: f0c26019,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802ab7c7f3a337481c24c0344e7e9ee8572c489c2a9152bf71af5cd84b77890f,PodSandboxId:f21bb00d9d7a2c0bf7a90e04574e49a29ffe81ba6979855393f63308bbc3fb71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724867990448704836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8vl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b58feff6-0c7d-4087-842c-353320bb2fe3,},Annotations:map[string]string{io.kubernetes.container.hash: b497aa21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0452b1ae6d797207d904056e994f7b91ac24326b726f6dcb28249caa0caf6bd,PodSandboxId:0942e47198f83af5d1c98c66fbe5008eafcecf54eb8124ec4e79a381e74e64c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867990440549198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
ec3bc1-bf5e-4ec3-a87d-215150f9cea1,},Annotations:map[string]string{io.kubernetes.container.hash: 23463daf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d3f78c00e3fffdfd23a23d4822130a1d55f1a1b22275746165493ca62bd504,PodSandboxId:cf2d09dcb573ae642a8a7ae87c87a8f1abdc52f8f0809746b41e4bd1b9edd581,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724867985460151011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5da7bc5b1
8ce3e59c808fbe517559339,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6a15c3c73212783bfa05375cd4d3b876387636506cbb8fd109c9dbcc363b27,PodSandboxId:bc0112ff8e7926be0209ca0f2ef0d73479738f203e803a4b2f0d4b674a372635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724867985449216321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 3622799ea8dd382ddc1e3e1deaa8413c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4d18314579622f660a94c76a1f12393feb0af022d1b50fbf539c4fa2c44f2d,PodSandboxId:096d75bbab61c0e12064fc0e77389e9244010d3d67850878f44e01cad8a1e2f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724867985438393790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89709dfbf301105a5e59e997ce2f8105,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8db5b44d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6f86252453bf3cfbc8c6066991555662e89ddf94fa803616923333989a40d8,PodSandboxId:516df1cc359758760126b6c1942e7faddb98b269425b7d17f57a0d94ed176bd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724867985368226169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48fd22f48b7be4b296566d2e3e86bf8,},Annotation
s:map[string]string{io.kubernetes.container.hash: dcb81663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e58891b3-6332-4b55-ad5d-71da8748cf44 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.544125594Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6223fd36-f7eb-4240-b419-162c5f56676f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.544246813Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6223fd36-f7eb-4240-b419-162c5f56676f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.553169354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b3161a0-d40d-43f5-83ea-f9f891fb00b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.553670108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868005553644520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b3161a0-d40d-43f5-83ea-f9f891fb00b7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.554284585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6abd7a77-7f36-42a3-b533-c194ff796cbe name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.554366821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6abd7a77-7f36-42a3-b533-c194ff796cbe name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.554583079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e86b4a92315ade2a5ceb3d3b40d337db4eefef18023998600c920ea2b98886a,PodSandboxId:3c8f3db6f138caad07c940074a35fd197cd917ac29b1ad319f731fc19c5e5519,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724867997713966632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tqdcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a003f5ee-20b1-449e-ae65-9f9e42b49168,},Annotations:map[string]string{io.kubernetes.container.hash: f0c26019,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802ab7c7f3a337481c24c0344e7e9ee8572c489c2a9152bf71af5cd84b77890f,PodSandboxId:f21bb00d9d7a2c0bf7a90e04574e49a29ffe81ba6979855393f63308bbc3fb71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724867990448704836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8vl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b58feff6-0c7d-4087-842c-353320bb2fe3,},Annotations:map[string]string{io.kubernetes.container.hash: b497aa21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0452b1ae6d797207d904056e994f7b91ac24326b726f6dcb28249caa0caf6bd,PodSandboxId:0942e47198f83af5d1c98c66fbe5008eafcecf54eb8124ec4e79a381e74e64c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867990440549198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
ec3bc1-bf5e-4ec3-a87d-215150f9cea1,},Annotations:map[string]string{io.kubernetes.container.hash: 23463daf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d3f78c00e3fffdfd23a23d4822130a1d55f1a1b22275746165493ca62bd504,PodSandboxId:cf2d09dcb573ae642a8a7ae87c87a8f1abdc52f8f0809746b41e4bd1b9edd581,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724867985460151011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5da7bc5b1
8ce3e59c808fbe517559339,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6a15c3c73212783bfa05375cd4d3b876387636506cbb8fd109c9dbcc363b27,PodSandboxId:bc0112ff8e7926be0209ca0f2ef0d73479738f203e803a4b2f0d4b674a372635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724867985449216321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 3622799ea8dd382ddc1e3e1deaa8413c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4d18314579622f660a94c76a1f12393feb0af022d1b50fbf539c4fa2c44f2d,PodSandboxId:096d75bbab61c0e12064fc0e77389e9244010d3d67850878f44e01cad8a1e2f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724867985438393790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89709dfbf301105a5e59e997ce2f8105,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8db5b44d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6f86252453bf3cfbc8c6066991555662e89ddf94fa803616923333989a40d8,PodSandboxId:516df1cc359758760126b6c1942e7faddb98b269425b7d17f57a0d94ed176bd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724867985368226169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48fd22f48b7be4b296566d2e3e86bf8,},Annotation
s:map[string]string{io.kubernetes.container.hash: dcb81663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6abd7a77-7f36-42a3-b533-c194ff796cbe name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.594780904Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3a42a8f-a0c3-4dc0-864f-e5107ece6d4d name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.595146468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3a42a8f-a0c3-4dc0-864f-e5107ece6d4d name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.596776401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ffc701cd-e295-4d16-a2e7-15221a640338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.597399383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868005597375454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ffc701cd-e295-4d16-a2e7-15221a640338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.598178294Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bd34e03-6f8c-4e0a-bd1b-4fb30a88f53a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.598264296Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bd34e03-6f8c-4e0a-bd1b-4fb30a88f53a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.598657869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e86b4a92315ade2a5ceb3d3b40d337db4eefef18023998600c920ea2b98886a,PodSandboxId:3c8f3db6f138caad07c940074a35fd197cd917ac29b1ad319f731fc19c5e5519,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724867997713966632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tqdcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a003f5ee-20b1-449e-ae65-9f9e42b49168,},Annotations:map[string]string{io.kubernetes.container.hash: f0c26019,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802ab7c7f3a337481c24c0344e7e9ee8572c489c2a9152bf71af5cd84b77890f,PodSandboxId:f21bb00d9d7a2c0bf7a90e04574e49a29ffe81ba6979855393f63308bbc3fb71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724867990448704836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8vl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b58feff6-0c7d-4087-842c-353320bb2fe3,},Annotations:map[string]string{io.kubernetes.container.hash: b497aa21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0452b1ae6d797207d904056e994f7b91ac24326b726f6dcb28249caa0caf6bd,PodSandboxId:0942e47198f83af5d1c98c66fbe5008eafcecf54eb8124ec4e79a381e74e64c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867990440549198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
ec3bc1-bf5e-4ec3-a87d-215150f9cea1,},Annotations:map[string]string{io.kubernetes.container.hash: 23463daf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d3f78c00e3fffdfd23a23d4822130a1d55f1a1b22275746165493ca62bd504,PodSandboxId:cf2d09dcb573ae642a8a7ae87c87a8f1abdc52f8f0809746b41e4bd1b9edd581,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724867985460151011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5da7bc5b1
8ce3e59c808fbe517559339,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6a15c3c73212783bfa05375cd4d3b876387636506cbb8fd109c9dbcc363b27,PodSandboxId:bc0112ff8e7926be0209ca0f2ef0d73479738f203e803a4b2f0d4b674a372635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724867985449216321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 3622799ea8dd382ddc1e3e1deaa8413c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4d18314579622f660a94c76a1f12393feb0af022d1b50fbf539c4fa2c44f2d,PodSandboxId:096d75bbab61c0e12064fc0e77389e9244010d3d67850878f44e01cad8a1e2f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724867985438393790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89709dfbf301105a5e59e997ce2f8105,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8db5b44d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6f86252453bf3cfbc8c6066991555662e89ddf94fa803616923333989a40d8,PodSandboxId:516df1cc359758760126b6c1942e7faddb98b269425b7d17f57a0d94ed176bd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724867985368226169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48fd22f48b7be4b296566d2e3e86bf8,},Annotation
s:map[string]string{io.kubernetes.container.hash: dcb81663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0bd34e03-6f8c-4e0a-bd1b-4fb30a88f53a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.633379155Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22882191-9297-49bd-ad52-f33bc97bc7be name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.633484676Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22882191-9297-49bd-ad52-f33bc97bc7be name=/runtime.v1.RuntimeService/Version
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.634634082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d390392a-dfbb-455c-82e1-bcb343554db5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.635643022Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868005635618374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d390392a-dfbb-455c-82e1-bcb343554db5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.636328532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7d2bfc1-ef22-4a1c-a87d-0d8e43601f25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.636404960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7d2bfc1-ef22-4a1c-a87d-0d8e43601f25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:00:05 test-preload-781179 crio[660]: time="2024-08-28 18:00:05.636619044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e86b4a92315ade2a5ceb3d3b40d337db4eefef18023998600c920ea2b98886a,PodSandboxId:3c8f3db6f138caad07c940074a35fd197cd917ac29b1ad319f731fc19c5e5519,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1724867997713966632,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-tqdcl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a003f5ee-20b1-449e-ae65-9f9e42b49168,},Annotations:map[string]string{io.kubernetes.container.hash: f0c26019,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:802ab7c7f3a337481c24c0344e7e9ee8572c489c2a9152bf71af5cd84b77890f,PodSandboxId:f21bb00d9d7a2c0bf7a90e04574e49a29ffe81ba6979855393f63308bbc3fb71,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1724867990448704836,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sc8vl,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: b58feff6-0c7d-4087-842c-353320bb2fe3,},Annotations:map[string]string{io.kubernetes.container.hash: b497aa21,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0452b1ae6d797207d904056e994f7b91ac24326b726f6dcb28249caa0caf6bd,PodSandboxId:0942e47198f83af5d1c98c66fbe5008eafcecf54eb8124ec4e79a381e74e64c0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724867990440549198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02
ec3bc1-bf5e-4ec3-a87d-215150f9cea1,},Annotations:map[string]string{io.kubernetes.container.hash: 23463daf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69d3f78c00e3fffdfd23a23d4822130a1d55f1a1b22275746165493ca62bd504,PodSandboxId:cf2d09dcb573ae642a8a7ae87c87a8f1abdc52f8f0809746b41e4bd1b9edd581,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1724867985460151011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5da7bc5b1
8ce3e59c808fbe517559339,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6a15c3c73212783bfa05375cd4d3b876387636506cbb8fd109c9dbcc363b27,PodSandboxId:bc0112ff8e7926be0209ca0f2ef0d73479738f203e803a4b2f0d4b674a372635,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1724867985449216321,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 3622799ea8dd382ddc1e3e1deaa8413c,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee4d18314579622f660a94c76a1f12393feb0af022d1b50fbf539c4fa2c44f2d,PodSandboxId:096d75bbab61c0e12064fc0e77389e9244010d3d67850878f44e01cad8a1e2f6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1724867985438393790,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89709dfbf301105a5e59e997ce2f8105,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8db5b44d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6f86252453bf3cfbc8c6066991555662e89ddf94fa803616923333989a40d8,PodSandboxId:516df1cc359758760126b6c1942e7faddb98b269425b7d17f57a0d94ed176bd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1724867985368226169,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-781179,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e48fd22f48b7be4b296566d2e3e86bf8,},Annotation
s:map[string]string{io.kubernetes.container.hash: dcb81663,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d7d2bfc1-ef22-4a1c-a87d-0d8e43601f25 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6e86b4a92315a       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   3c8f3db6f138c       coredns-6d4b75cb6d-tqdcl
	802ab7c7f3a33       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   f21bb00d9d7a2       kube-proxy-sc8vl
	f0452b1ae6d79       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   0942e47198f83       storage-provisioner
	69d3f78c00e3f       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   cf2d09dcb573a       kube-scheduler-test-preload-781179
	bc6a15c3c7321       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   bc0112ff8e792       kube-controller-manager-test-preload-781179
	ee4d183145796       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   096d75bbab61c       etcd-test-preload-781179
	0f6f86252453b       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   516df1cc35975       kube-apiserver-test-preload-781179
	
	
	==> coredns [6e86b4a92315ade2a5ceb3d3b40d337db4eefef18023998600c920ea2b98886a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:40499 - 39164 "HINFO IN 8657570135187446942.2204454486052696691. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016884678s
	
	
	==> describe nodes <==
	Name:               test-preload-781179
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-781179
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=test-preload-781179
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T17_57_40_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 17:57:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-781179
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:59:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:59:59 +0000   Wed, 28 Aug 2024 17:57:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:59:59 +0000   Wed, 28 Aug 2024 17:57:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:59:59 +0000   Wed, 28 Aug 2024 17:57:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:59:59 +0000   Wed, 28 Aug 2024 17:59:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.175
	  Hostname:    test-preload-781179
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d42ff847a24493c8516c22582d5146e
	  System UUID:                3d42ff84-7a24-493c-8516-c22582d5146e
	  Boot ID:                    41070be0-7bc6-400a-bedd-7648a816d3ad
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-tqdcl                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m12s
	  kube-system                 etcd-test-preload-781179                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m25s
	  kube-system                 kube-apiserver-test-preload-781179             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-test-preload-781179    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-sc8vl                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-test-preload-781179             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 2m10s              kube-proxy       
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m25s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node test-preload-781179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node test-preload-781179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node test-preload-781179 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m15s              kubelet          Node test-preload-781179 status is now: NodeReady
	  Normal  RegisteredNode           2m13s              node-controller  Node test-preload-781179 event: Registered Node test-preload-781179 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-781179 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-781179 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-781179 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                 node-controller  Node test-preload-781179 event: Registered Node test-preload-781179 in Controller
	
	
	==> dmesg <==
	[Aug28 17:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051137] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036975] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743368] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.711144] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.504440] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.027676] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.056000] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.051617] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.154131] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.142726] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.272389] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[ +12.595544] systemd-fstab-generator[980]: Ignoring "noauto" option for root device
	[  +0.057300] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.546307] systemd-fstab-generator[1110]: Ignoring "noauto" option for root device
	[  +5.398639] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.673821] systemd-fstab-generator[1735]: Ignoring "noauto" option for root device
	[  +4.959446] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [ee4d18314579622f660a94c76a1f12393feb0af022d1b50fbf539c4fa2c44f2d] <==
	{"level":"info","ts":"2024-08-28T17:59:45.778Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"99b2d3c172539956","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-28T17:59:45.782Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T17:59:45.782Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"99b2d3c172539956","initial-advertise-peer-urls":["https://192.168.39.175:2380"],"listen-peer-urls":["https://192.168.39.175:2380"],"advertise-client-urls":["https://192.168.39.175:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.175:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T17:59:45.782Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T17:59:45.782Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-28T17:59:45.783Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.175:2380"}
	{"level":"info","ts":"2024-08-28T17:59:45.784Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.175:2380"}
	{"level":"info","ts":"2024-08-28T17:59:45.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 switched to configuration voters=(11075147261457701206)"}
	{"level":"info","ts":"2024-08-28T17:59:45.785Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"915d0614c2e3855c","local-member-id":"99b2d3c172539956","added-peer-id":"99b2d3c172539956","added-peer-peer-urls":["https://192.168.39.175:2380"]}
	{"level":"info","ts":"2024-08-28T17:59:45.785Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"915d0614c2e3855c","local-member-id":"99b2d3c172539956","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:59:45.785Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 received MsgPreVoteResp from 99b2d3c172539956 at term 2"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 received MsgVoteResp from 99b2d3c172539956 at term 3"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"99b2d3c172539956 became leader at term 3"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 99b2d3c172539956 elected leader 99b2d3c172539956 at term 3"}
	{"level":"info","ts":"2024-08-28T17:59:47.243Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"99b2d3c172539956","local-member-attributes":"{Name:test-preload-781179 ClientURLs:[https://192.168.39.175:2379]}","request-path":"/0/members/99b2d3c172539956/attributes","cluster-id":"915d0614c2e3855c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T17:59:47.244Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:59:47.246Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.175:2379"}
	{"level":"info","ts":"2024-08-28T17:59:47.246Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T17:59:47.246Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T17:59:47.246Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T17:59:47.247Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:00:05 up 0 min,  0 users,  load average: 1.38, 0.35, 0.12
	Linux test-preload-781179 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f6f86252453bf3cfbc8c6066991555662e89ddf94fa803616923333989a40d8] <==
	I0828 17:59:49.593823       1 controller.go:85] Starting OpenAPI controller
	I0828 17:59:49.593847       1 controller.go:85] Starting OpenAPI V3 controller
	I0828 17:59:49.593886       1 naming_controller.go:291] Starting NamingConditionController
	I0828 17:59:49.593985       1 establishing_controller.go:76] Starting EstablishingController
	I0828 17:59:49.594008       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0828 17:59:49.594803       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0828 17:59:49.594820       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0828 17:59:49.730207       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0828 17:59:49.739282       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0828 17:59:49.746345       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0828 17:59:49.751632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 17:59:49.779096       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0828 17:59:49.793580       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 17:59:49.795852       1 cache.go:39] Caches are synced for autoregister controller
	I0828 17:59:49.796124       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0828 17:59:50.274202       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0828 17:59:50.592839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 17:59:50.824562       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0828 17:59:51.401421       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0828 17:59:51.416013       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0828 17:59:51.451843       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0828 17:59:51.480876       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 17:59:51.486633       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 18:00:02.082610       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0828 18:00:02.129004       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [bc6a15c3c73212783bfa05375cd4d3b876387636506cbb8fd109c9dbcc363b27] <==
	I0828 18:00:02.073666       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0828 18:00:02.078033       1 shared_informer.go:262] Caches are synced for taint
	I0828 18:00:02.078178       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0828 18:00:02.078313       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-781179. Assuming now as a timestamp.
	I0828 18:00:02.078368       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0828 18:00:02.078636       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0828 18:00:02.078960       1 event.go:294] "Event occurred" object="test-preload-781179" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-781179 event: Registered Node test-preload-781179 in Controller"
	I0828 18:00:02.082878       1 shared_informer.go:262] Caches are synced for attach detach
	I0828 18:00:02.090646       1 shared_informer.go:262] Caches are synced for disruption
	I0828 18:00:02.090667       1 disruption.go:371] Sending events to api server.
	I0828 18:00:02.091540       1 shared_informer.go:262] Caches are synced for ephemeral
	I0828 18:00:02.097423       1 shared_informer.go:262] Caches are synced for persistent volume
	I0828 18:00:02.097625       1 shared_informer.go:262] Caches are synced for GC
	I0828 18:00:02.109013       1 shared_informer.go:262] Caches are synced for stateful set
	I0828 18:00:02.110872       1 shared_informer.go:262] Caches are synced for HPA
	I0828 18:00:02.118807       1 shared_informer.go:262] Caches are synced for endpoint
	I0828 18:00:02.119399       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0828 18:00:02.131971       1 shared_informer.go:262] Caches are synced for PVC protection
	I0828 18:00:02.196304       1 shared_informer.go:262] Caches are synced for resource quota
	I0828 18:00:02.236853       1 shared_informer.go:262] Caches are synced for resource quota
	I0828 18:00:02.243046       1 shared_informer.go:262] Caches are synced for crt configmap
	I0828 18:00:02.256329       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0828 18:00:02.679646       1 shared_informer.go:262] Caches are synced for garbage collector
	I0828 18:00:02.704238       1 shared_informer.go:262] Caches are synced for garbage collector
	I0828 18:00:02.704316       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [802ab7c7f3a337481c24c0344e7e9ee8572c489c2a9152bf71af5cd84b77890f] <==
	I0828 17:59:50.743237       1 node.go:163] Successfully retrieved node IP: 192.168.39.175
	I0828 17:59:50.743419       1 server_others.go:138] "Detected node IP" address="192.168.39.175"
	I0828 17:59:50.743470       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0828 17:59:50.803318       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0828 17:59:50.803346       1 server_others.go:206] "Using iptables Proxier"
	I0828 17:59:50.804019       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0828 17:59:50.804957       1 server.go:661] "Version info" version="v1.24.4"
	I0828 17:59:50.804984       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:59:50.817974       1 config.go:317] "Starting service config controller"
	I0828 17:59:50.818084       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0828 17:59:50.818111       1 config.go:226] "Starting endpoint slice config controller"
	I0828 17:59:50.818116       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0828 17:59:50.821062       1 config.go:444] "Starting node config controller"
	I0828 17:59:50.821129       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0828 17:59:50.919122       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0828 17:59:50.919182       1 shared_informer.go:262] Caches are synced for service config
	I0828 17:59:50.921608       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [69d3f78c00e3fffdfd23a23d4822130a1d55f1a1b22275746165493ca62bd504] <==
	I0828 17:59:46.638017       1 serving.go:348] Generated self-signed cert in-memory
	W0828 17:59:49.648642       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 17:59:49.649101       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 17:59:49.649242       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 17:59:49.649445       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 17:59:49.711434       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0828 17:59:49.711979       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 17:59:49.722949       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0828 17:59:49.724720       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 17:59:49.726099       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 17:59:49.724811       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0828 17:59:49.826472       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.716422    1117 topology_manager.go:200] "Topology Admit Handler"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: E0828 17:59:49.717542    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tqdcl" podUID=a003f5ee-20b1-449e-ae65-9f9e42b49168
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: E0828 17:59:49.764470    1117 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765050    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b58feff6-0c7d-4087-842c-353320bb2fe3-lib-modules\") pod \"kube-proxy-sc8vl\" (UID: \"b58feff6-0c7d-4087-842c-353320bb2fe3\") " pod="kube-system/kube-proxy-sc8vl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765260    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume\") pod \"coredns-6d4b75cb6d-tqdcl\" (UID: \"a003f5ee-20b1-449e-ae65-9f9e42b49168\") " pod="kube-system/coredns-6d4b75cb6d-tqdcl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765362    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58qw\" (UniqueName: \"kubernetes.io/projected/a003f5ee-20b1-449e-ae65-9f9e42b49168-kube-api-access-h58qw\") pod \"coredns-6d4b75cb6d-tqdcl\" (UID: \"a003f5ee-20b1-449e-ae65-9f9e42b49168\") " pod="kube-system/coredns-6d4b75cb6d-tqdcl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765526    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b58feff6-0c7d-4087-842c-353320bb2fe3-xtables-lock\") pod \"kube-proxy-sc8vl\" (UID: \"b58feff6-0c7d-4087-842c-353320bb2fe3\") " pod="kube-system/kube-proxy-sc8vl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765645    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54h2f\" (UniqueName: \"kubernetes.io/projected/b58feff6-0c7d-4087-842c-353320bb2fe3-kube-api-access-54h2f\") pod \"kube-proxy-sc8vl\" (UID: \"b58feff6-0c7d-4087-842c-353320bb2fe3\") " pod="kube-system/kube-proxy-sc8vl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765743    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hrjv\" (UniqueName: \"kubernetes.io/projected/02ec3bc1-bf5e-4ec3-a87d-215150f9cea1-kube-api-access-2hrjv\") pod \"storage-provisioner\" (UID: \"02ec3bc1-bf5e-4ec3-a87d-215150f9cea1\") " pod="kube-system/storage-provisioner"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765880    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/02ec3bc1-bf5e-4ec3-a87d-215150f9cea1-tmp\") pod \"storage-provisioner\" (UID: \"02ec3bc1-bf5e-4ec3-a87d-215150f9cea1\") " pod="kube-system/storage-provisioner"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765951    1117 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b58feff6-0c7d-4087-842c-353320bb2fe3-kube-proxy\") pod \"kube-proxy-sc8vl\" (UID: \"b58feff6-0c7d-4087-842c-353320bb2fe3\") " pod="kube-system/kube-proxy-sc8vl"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.765974    1117 reconciler.go:159] "Reconciler: start to sync state"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.853709    1117 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-781179"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.853811    1117 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-781179"
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: I0828 17:59:49.861074    1117 setters.go:532] "Node became not ready" node="test-preload-781179" condition={Type:Ready Status:False LastHeartbeatTime:2024-08-28 17:59:49.860990667 +0000 UTC m=+5.279642667 LastTransitionTime:2024-08-28 17:59:49.860990667 +0000 UTC m=+5.279642667 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: E0828 17:59:49.869619    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 28 17:59:49 test-preload-781179 kubelet[1117]: E0828 17:59:49.869868    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume podName:a003f5ee-20b1-449e-ae65-9f9e42b49168 nodeName:}" failed. No retries permitted until 2024-08-28 17:59:50.369801254 +0000 UTC m=+5.788453257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume") pod "coredns-6d4b75cb6d-tqdcl" (UID: "a003f5ee-20b1-449e-ae65-9f9e42b49168") : object "kube-system"/"coredns" not registered
	Aug 28 17:59:50 test-preload-781179 kubelet[1117]: E0828 17:59:50.372454    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 28 17:59:50 test-preload-781179 kubelet[1117]: E0828 17:59:50.372546    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume podName:a003f5ee-20b1-449e-ae65-9f9e42b49168 nodeName:}" failed. No retries permitted until 2024-08-28 17:59:51.37253075 +0000 UTC m=+6.791182750 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume") pod "coredns-6d4b75cb6d-tqdcl" (UID: "a003f5ee-20b1-449e-ae65-9f9e42b49168") : object "kube-system"/"coredns" not registered
	Aug 28 17:59:50 test-preload-781179 kubelet[1117]: E0828 17:59:50.806160    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tqdcl" podUID=a003f5ee-20b1-449e-ae65-9f9e42b49168
	Aug 28 17:59:51 test-preload-781179 kubelet[1117]: E0828 17:59:51.379614    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 28 17:59:51 test-preload-781179 kubelet[1117]: E0828 17:59:51.379940    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume podName:a003f5ee-20b1-449e-ae65-9f9e42b49168 nodeName:}" failed. No retries permitted until 2024-08-28 17:59:53.379717351 +0000 UTC m=+8.798369351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume") pod "coredns-6d4b75cb6d-tqdcl" (UID: "a003f5ee-20b1-449e-ae65-9f9e42b49168") : object "kube-system"/"coredns" not registered
	Aug 28 17:59:52 test-preload-781179 kubelet[1117]: E0828 17:59:52.808036    1117 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-tqdcl" podUID=a003f5ee-20b1-449e-ae65-9f9e42b49168
	Aug 28 17:59:53 test-preload-781179 kubelet[1117]: E0828 17:59:53.402001    1117 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 28 17:59:53 test-preload-781179 kubelet[1117]: E0828 17:59:53.402073    1117 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume podName:a003f5ee-20b1-449e-ae65-9f9e42b49168 nodeName:}" failed. No retries permitted until 2024-08-28 17:59:57.402058145 +0000 UTC m=+12.820710145 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a003f5ee-20b1-449e-ae65-9f9e42b49168-config-volume") pod "coredns-6d4b75cb6d-tqdcl" (UID: "a003f5ee-20b1-449e-ae65-9f9e42b49168") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [f0452b1ae6d797207d904056e994f7b91ac24326b726f6dcb28249caa0caf6bd] <==
	I0828 17:59:50.537407       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-781179 -n test-preload-781179
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-781179 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-781179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-781179
--- FAIL: TestPreload (222.35s)

                                                
                                    
TestKubernetesUpgrade (401.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m51.930488336s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-502283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-502283" primary control-plane node in "kubernetes-upgrade-502283" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:04:59.497913   58365 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:04:59.498169   58365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:04:59.498178   58365 out.go:358] Setting ErrFile to fd 2...
	I0828 18:04:59.498183   58365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:04:59.498356   58365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:04:59.498914   58365 out.go:352] Setting JSON to false
	I0828 18:04:59.499845   58365 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6445,"bootTime":1724861854,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:04:59.499905   58365 start.go:139] virtualization: kvm guest
	I0828 18:04:59.502062   58365 out.go:177] * [kubernetes-upgrade-502283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:04:59.503256   58365 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:04:59.503309   58365 notify.go:220] Checking for updates...
	I0828 18:04:59.505429   58365 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:04:59.506523   58365 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:04:59.507727   58365 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:04:59.509122   58365 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:04:59.510295   58365 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:04:59.512063   58365 config.go:182] Loaded profile config "NoKubernetes-682143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0828 18:04:59.512194   58365 config.go:182] Loaded profile config "cert-expiration-523070": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:04:59.512324   58365 config.go:182] Loaded profile config "running-upgrade-783149": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0828 18:04:59.512417   58365 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:04:59.548936   58365 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 18:04:59.549995   58365 start.go:297] selected driver: kvm2
	I0828 18:04:59.550017   58365 start.go:901] validating driver "kvm2" against <nil>
	I0828 18:04:59.550031   58365 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:04:59.550743   58365 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:04:59.550820   58365 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:04:59.565800   58365 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:04:59.565852   58365 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 18:04:59.566059   58365 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 18:04:59.566098   58365 cni.go:84] Creating CNI manager for ""
	I0828 18:04:59.566113   58365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:04:59.566123   58365 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 18:04:59.566171   58365 start.go:340] cluster config:
	{Name:kubernetes-upgrade-502283 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-502283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:04:59.566261   58365 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:04:59.567936   58365 out.go:177] * Starting "kubernetes-upgrade-502283" primary control-plane node in "kubernetes-upgrade-502283" cluster
	I0828 18:04:59.568857   58365 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:04:59.568884   58365 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:04:59.568893   58365 cache.go:56] Caching tarball of preloaded images
	I0828 18:04:59.568958   58365 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:04:59.568968   58365 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:04:59.569065   58365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/config.json ...
	I0828 18:04:59.569083   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/config.json: {Name:mke3ad04e0a2d50224827c60a99fa10ae743385d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:04:59.569231   58365 start.go:360] acquireMachinesLock for kubernetes-upgrade-502283: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:05:23.962444   58365 start.go:364] duration metric: took 24.393166954s to acquireMachinesLock for "kubernetes-upgrade-502283"
	I0828 18:05:23.962501   58365 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-502283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-502283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:05:23.962647   58365 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 18:05:23.964628   58365 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 18:05:23.964905   58365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:05:23.964960   58365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:05:23.982028   58365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43531
	I0828 18:05:23.982439   58365 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:05:23.983070   58365 main.go:141] libmachine: Using API Version  1
	I0828 18:05:23.983098   58365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:05:23.983501   58365 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:05:23.983737   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetMachineName
	I0828 18:05:23.983896   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:23.984085   58365 start.go:159] libmachine.API.Create for "kubernetes-upgrade-502283" (driver="kvm2")
	I0828 18:05:23.984116   58365 client.go:168] LocalClient.Create starting
	I0828 18:05:23.984164   58365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 18:05:23.984207   58365 main.go:141] libmachine: Decoding PEM data...
	I0828 18:05:23.984231   58365 main.go:141] libmachine: Parsing certificate...
	I0828 18:05:23.984322   58365 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 18:05:23.984365   58365 main.go:141] libmachine: Decoding PEM data...
	I0828 18:05:23.984380   58365 main.go:141] libmachine: Parsing certificate...
	I0828 18:05:23.984397   58365 main.go:141] libmachine: Running pre-create checks...
	I0828 18:05:23.984411   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .PreCreateCheck
	I0828 18:05:23.984806   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetConfigRaw
	I0828 18:05:23.985213   58365 main.go:141] libmachine: Creating machine...
	I0828 18:05:23.985225   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Create
	I0828 18:05:23.985363   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Creating KVM machine...
	I0828 18:05:23.986681   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found existing default KVM network
	I0828 18:05:23.987964   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:23.987773   58871 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:67:48:a7} reservation:<nil>}
	I0828 18:05:23.989027   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:23.988944   58871 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002846a0}
	I0828 18:05:23.989048   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | created network xml: 
	I0828 18:05:23.989060   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | <network>
	I0828 18:05:23.989069   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   <name>mk-kubernetes-upgrade-502283</name>
	I0828 18:05:23.989081   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   <dns enable='no'/>
	I0828 18:05:23.989094   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   
	I0828 18:05:23.989114   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0828 18:05:23.989126   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |     <dhcp>
	I0828 18:05:23.989138   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0828 18:05:23.989148   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |     </dhcp>
	I0828 18:05:23.989158   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   </ip>
	I0828 18:05:23.989169   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG |   
	I0828 18:05:23.989179   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | </network>
	I0828 18:05:23.989195   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | 
	I0828 18:05:23.994382   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | trying to create private KVM network mk-kubernetes-upgrade-502283 192.168.50.0/24...
	I0828 18:05:24.069138   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | private KVM network mk-kubernetes-upgrade-502283 192.168.50.0/24 created
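
The private network logged above is created by the kvm2 driver through the libvirt API. As a point of reference only, the same network can be defined by hand from the logged XML with the virsh CLI; the Go sketch below shells out to virsh and illustrates the equivalent manual steps rather than the driver's actual code path. It assumes virsh is installed and the caller is allowed to manage qemu:///system.

// Hand-rolled equivalent of the network-creation step, via the virsh CLI.
package main

import (
	"log"
	"os"
	"os/exec"
)

// networkXML mirrors the XML the driver logged for mk-kubernetes-upgrade-502283.
const networkXML = `<network>
  <name>mk-kubernetes-upgrade-502283</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		log.Fatal(err)
	}
	f.Close()

	// Define the persistent network, then start it.
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-kubernetes-upgrade-502283"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
	}
}
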
	I0828 18:05:24.069182   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:24.069091   58871 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:05:24.069193   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283 ...
	I0828 18:05:24.069208   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 18:05:24.069217   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 18:05:24.300358   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:24.300244   58871 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa...
	I0828 18:05:24.442037   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:24.441887   58871 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/kubernetes-upgrade-502283.rawdisk...
	I0828 18:05:24.442065   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Writing magic tar header
	I0828 18:05:24.442106   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Writing SSH key tar header
	I0828 18:05:24.442126   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:24.442015   58871 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283 ...
	I0828 18:05:24.442154   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283
	I0828 18:05:24.442220   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283 (perms=drwx------)
	I0828 18:05:24.442245   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 18:05:24.442257   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 18:05:24.442272   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:05:24.442284   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 18:05:24.442301   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 18:05:24.442324   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home/jenkins
	I0828 18:05:24.442342   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 18:05:24.442354   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Checking permissions on dir: /home
	I0828 18:05:24.442366   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Skipping /home - not owner
	I0828 18:05:24.442381   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 18:05:24.442394   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 18:05:24.442407   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 18:05:24.442422   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Creating domain...
	I0828 18:05:24.443583   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) define libvirt domain using xml: 
	I0828 18:05:24.443603   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) <domain type='kvm'>
	I0828 18:05:24.443627   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <name>kubernetes-upgrade-502283</name>
	I0828 18:05:24.443642   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <memory unit='MiB'>2200</memory>
	I0828 18:05:24.443681   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <vcpu>2</vcpu>
	I0828 18:05:24.443698   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <features>
	I0828 18:05:24.443704   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <acpi/>
	I0828 18:05:24.443711   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <apic/>
	I0828 18:05:24.443718   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <pae/>
	I0828 18:05:24.443725   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     
	I0828 18:05:24.443731   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   </features>
	I0828 18:05:24.443735   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <cpu mode='host-passthrough'>
	I0828 18:05:24.443744   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   
	I0828 18:05:24.443749   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   </cpu>
	I0828 18:05:24.443756   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <os>
	I0828 18:05:24.443762   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <type>hvm</type>
	I0828 18:05:24.443790   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <boot dev='cdrom'/>
	I0828 18:05:24.443813   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <boot dev='hd'/>
	I0828 18:05:24.443825   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <bootmenu enable='no'/>
	I0828 18:05:24.443847   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   </os>
	I0828 18:05:24.443859   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   <devices>
	I0828 18:05:24.443868   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <disk type='file' device='cdrom'>
	I0828 18:05:24.443881   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/boot2docker.iso'/>
	I0828 18:05:24.443895   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <target dev='hdc' bus='scsi'/>
	I0828 18:05:24.443907   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <readonly/>
	I0828 18:05:24.443918   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </disk>
	I0828 18:05:24.443931   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <disk type='file' device='disk'>
	I0828 18:05:24.443943   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 18:05:24.443958   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/kubernetes-upgrade-502283.rawdisk'/>
	I0828 18:05:24.443970   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <target dev='hda' bus='virtio'/>
	I0828 18:05:24.443982   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </disk>
	I0828 18:05:24.443993   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <interface type='network'>
	I0828 18:05:24.444007   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <source network='mk-kubernetes-upgrade-502283'/>
	I0828 18:05:24.444018   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <model type='virtio'/>
	I0828 18:05:24.444029   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </interface>
	I0828 18:05:24.444044   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <interface type='network'>
	I0828 18:05:24.444053   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <source network='default'/>
	I0828 18:05:24.444060   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <model type='virtio'/>
	I0828 18:05:24.444072   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </interface>
	I0828 18:05:24.444083   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <serial type='pty'>
	I0828 18:05:24.444095   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <target port='0'/>
	I0828 18:05:24.444109   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </serial>
	I0828 18:05:24.444121   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <console type='pty'>
	I0828 18:05:24.444131   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <target type='serial' port='0'/>
	I0828 18:05:24.444137   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </console>
	I0828 18:05:24.444148   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     <rng model='virtio'>
	I0828 18:05:24.444158   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)       <backend model='random'>/dev/random</backend>
	I0828 18:05:24.444169   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     </rng>
	I0828 18:05:24.444180   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     
	I0828 18:05:24.444192   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)     
	I0828 18:05:24.444207   58365 main.go:141] libmachine: (kubernetes-upgrade-502283)   </devices>
	I0828 18:05:24.444218   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) </domain>
	I0828 18:05:24.444224   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) 
	I0828 18:05:24.448135   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:38:46 in network default
	I0828 18:05:24.448718   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Ensuring networks are active...
	I0828 18:05:24.448737   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:24.449393   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Ensuring network default is active
	I0828 18:05:24.449728   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Ensuring network mk-kubernetes-upgrade-502283 is active
	I0828 18:05:24.450232   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Getting domain xml...
	I0828 18:05:24.450965   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Creating domain...
	I0828 18:05:25.664529   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Waiting to get IP...
	I0828 18:05:25.665353   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:25.665709   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:25.665737   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:25.665686   58871 retry.go:31] will retry after 189.171578ms: waiting for machine to come up
	I0828 18:05:25.856054   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:25.856529   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:25.856558   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:25.856489   58871 retry.go:31] will retry after 378.450234ms: waiting for machine to come up
	I0828 18:05:26.236093   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:26.236624   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:26.236681   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:26.236576   58871 retry.go:31] will retry after 325.656306ms: waiting for machine to come up
	I0828 18:05:26.564147   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:26.564618   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:26.564653   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:26.564567   58871 retry.go:31] will retry after 554.484932ms: waiting for machine to come up
	I0828 18:05:27.120311   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:27.120745   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:27.120773   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:27.120682   58871 retry.go:31] will retry after 696.903741ms: waiting for machine to come up
	I0828 18:05:27.819652   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:27.820097   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:27.820122   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:27.820032   58871 retry.go:31] will retry after 843.962077ms: waiting for machine to come up
	I0828 18:05:28.666312   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:28.666926   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:28.666958   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:28.666872   58871 retry.go:31] will retry after 818.835927ms: waiting for machine to come up
	I0828 18:05:29.487161   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:29.487680   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:29.487712   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:29.487641   58871 retry.go:31] will retry after 1.357525309s: waiting for machine to come up
	I0828 18:05:30.847276   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:30.847810   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:30.847840   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:30.847764   58871 retry.go:31] will retry after 1.698828341s: waiting for machine to come up
	I0828 18:05:32.548628   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:32.549182   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:32.549221   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:32.549146   58871 retry.go:31] will retry after 1.650547688s: waiting for machine to come up
	I0828 18:05:34.201375   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:34.201962   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:34.201991   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:34.201912   58871 retry.go:31] will retry after 2.883011989s: waiting for machine to come up
	I0828 18:05:37.087327   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:37.087842   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:37.087883   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:37.087797   58871 retry.go:31] will retry after 3.522524405s: waiting for machine to come up
	I0828 18:05:40.611628   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:40.612074   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find current IP address of domain kubernetes-upgrade-502283 in network mk-kubernetes-upgrade-502283
	I0828 18:05:40.612101   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | I0828 18:05:40.612037   58871 retry.go:31] will retry after 4.433445681s: waiting for machine to come up
	I0828 18:05:45.047133   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.047768   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Found IP for machine: 192.168.50.140
	I0828 18:05:45.047793   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has current primary IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
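
The retry lines above show the driver repeatedly asking libvirt for the new domain's DHCP lease, sleeping a little longer each time until an address appears. A stripped-down sketch of that wait-with-backoff pattern is shown below; lookupIP is a hypothetical stand-in for the lease query, and the delays and cap are illustrative rather than the driver's exact schedule.

// Minimal wait-with-backoff sketch mirroring the "will retry after ..." loop.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is hypothetical; the real driver queries libvirt for the domain's lease.
func lookupIP() (string, error) { return "", errNoLease }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		if delay *= 2; delay > 5*time.Second {
			delay = 5 * time.Second // cap the growth, roughly as the logged delays do
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	ip, err := waitForIP(3 * time.Second)
	fmt.Println(ip, err)
}
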
	I0828 18:05:45.047802   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Reserving static IP address...
	I0828 18:05:45.048238   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-502283", mac: "52:54:00:07:04:81", ip: "192.168.50.140"} in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.125181   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Reserved static IP address: 192.168.50.140
	I0828 18:05:45.125213   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Waiting for SSH to be available...
	I0828 18:05:45.125222   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Getting to WaitForSSH function...
	I0828 18:05:45.127836   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.128248   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.128276   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.128463   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Using SSH client type: external
	I0828 18:05:45.128491   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa (-rw-------)
	I0828 18:05:45.128534   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.140 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:05:45.128550   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | About to run SSH command:
	I0828 18:05:45.128584   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | exit 0
	I0828 18:05:45.262526   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | SSH cmd err, output: <nil>: 
	I0828 18:05:45.262820   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) KVM machine creation complete!
	I0828 18:05:45.263159   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetConfigRaw
	I0828 18:05:45.263814   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:45.264058   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:45.264239   58365 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 18:05:45.264256   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetState
	I0828 18:05:45.265753   58365 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 18:05:45.265770   58365 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 18:05:45.265778   58365 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 18:05:45.265787   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.268606   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.269064   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.269094   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.269300   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:45.269498   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.269679   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.269839   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:45.270008   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:45.270313   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:45.270334   58365 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 18:05:45.385507   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
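
With an address reserved, the driver probes the guest by opening an SSH session with the machine's generated key and running exit 0, as logged above. A small sketch of the same probe using golang.org/x/crypto/ssh follows; the key path and address are the ones from this run, and InsecureIgnoreHostKey mirrors the StrictHostKeyChecking=no used by the external ssh fallback. This is an illustration, not the libmachine implementation.

// Probe "is SSH up yet?" by running `exit 0` over an authenticated session.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // probe only; no host-key pinning
	}
	client, err := ssh.Dial("tcp", "192.168.50.140:22", cfg)
	if err != nil {
		log.Fatalf("ssh not ready: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("exit 0"); err != nil {
		log.Fatalf("probe failed: %v", err)
	}
	log.Println("SSH is available")
}
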
	I0828 18:05:45.385535   58365 main.go:141] libmachine: Detecting the provisioner...
	I0828 18:05:45.385547   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.388551   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.388919   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.388953   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.389213   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:45.389442   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.389635   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.389813   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:45.390000   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:45.390229   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:45.390242   58365 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 18:05:45.506876   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 18:05:45.506975   58365 main.go:141] libmachine: found compatible host: buildroot
	I0828 18:05:45.506988   58365 main.go:141] libmachine: Provisioning with buildroot...
	I0828 18:05:45.507002   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetMachineName
	I0828 18:05:45.507269   58365 buildroot.go:166] provisioning hostname "kubernetes-upgrade-502283"
	I0828 18:05:45.507300   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetMachineName
	I0828 18:05:45.507524   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.509997   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.510370   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.510399   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.510503   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:45.510664   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.510767   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.510955   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:45.511155   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:45.511321   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:45.511334   58365 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-502283 && echo "kubernetes-upgrade-502283" | sudo tee /etc/hostname
	I0828 18:05:45.635817   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-502283
	
	I0828 18:05:45.635839   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.638762   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.639155   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.639184   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.639335   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:45.639523   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.639701   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.639822   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:45.639998   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:45.640200   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:45.640217   58365 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-502283' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-502283/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-502283' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:05:45.761984   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:05:45.762016   58365 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:05:45.762063   58365 buildroot.go:174] setting up certificates
	I0828 18:05:45.762092   58365 provision.go:84] configureAuth start
	I0828 18:05:45.762107   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetMachineName
	I0828 18:05:45.762407   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetIP
	I0828 18:05:45.765114   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.765434   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.765454   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.765607   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.767772   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.768073   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.768102   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.768196   58365 provision.go:143] copyHostCerts
	I0828 18:05:45.768262   58365 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:05:45.768281   58365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:05:45.768345   58365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:05:45.768486   58365 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:05:45.768498   58365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:05:45.768557   58365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:05:45.768659   58365 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:05:45.768669   58365 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:05:45.768697   58365 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:05:45.768778   58365 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-502283 san=[127.0.0.1 192.168.50.140 kubernetes-upgrade-502283 localhost minikube]
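
The provisioner then issues a server certificate whose SANs cover 127.0.0.1, 192.168.50.140, the machine hostname, localhost and minikube, signed by the CA under .minikube/certs. The sketch below generates a certificate with the same SANs but self-signs it for brevity, so it is not a drop-in for the files minikube writes; the 26280h lifetime matches the CertExpiration value in the cluster config earlier in this log.

// Self-signed stand-in for the server cert the provisioner generates.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-502283"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-502283", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.140")},
	}
	// Self-signed here; minikube signs with ca.pem/ca-key.pem instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
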
	I0828 18:05:45.862478   58365 provision.go:177] copyRemoteCerts
	I0828 18:05:45.862556   58365 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:05:45.862587   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:45.865164   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.865502   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:45.865530   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:45.865687   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:45.865880   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:45.866028   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:45.866187   58365 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:05:45.959336   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:05:45.988468   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0828 18:05:46.019776   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:05:46.051867   58365 provision.go:87] duration metric: took 289.761895ms to configureAuth
	I0828 18:05:46.051894   58365 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:05:46.052071   58365 config.go:182] Loaded profile config "kubernetes-upgrade-502283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:05:46.052166   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:46.055293   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.055643   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.055671   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.055859   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:46.056081   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.056252   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.056405   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:46.056578   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:46.056777   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:46.056795   58365 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:05:46.304153   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:05:46.304191   58365 main.go:141] libmachine: Checking connection to Docker...
	I0828 18:05:46.304203   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetURL
	I0828 18:05:46.305356   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Using libvirt version 6000000
	I0828 18:05:46.307735   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.308061   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.308081   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.308289   58365 main.go:141] libmachine: Docker is up and running!
	I0828 18:05:46.308301   58365 main.go:141] libmachine: Reticulating splines...
	I0828 18:05:46.308309   58365 client.go:171] duration metric: took 22.324179537s to LocalClient.Create
	I0828 18:05:46.308336   58365 start.go:167] duration metric: took 22.324249791s to libmachine.API.Create "kubernetes-upgrade-502283"
	I0828 18:05:46.308349   58365 start.go:293] postStartSetup for "kubernetes-upgrade-502283" (driver="kvm2")
	I0828 18:05:46.308365   58365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:05:46.308400   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:46.308692   58365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:05:46.308721   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:46.311014   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.311329   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.311358   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.311532   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:46.311702   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.311872   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:46.312039   58365 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:05:46.396658   58365 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:05:46.400829   58365 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:05:46.400859   58365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:05:46.400941   58365 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:05:46.401017   58365 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:05:46.401100   58365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:05:46.410575   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:05:46.435249   58365 start.go:296] duration metric: took 126.882325ms for postStartSetup
	I0828 18:05:46.435308   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetConfigRaw
	I0828 18:05:46.436083   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetIP
	I0828 18:05:46.439001   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.439369   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.439398   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.439646   58365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/config.json ...
	I0828 18:05:46.439848   58365 start.go:128] duration metric: took 22.477189824s to createHost
	I0828 18:05:46.439872   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:46.442666   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.443031   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.443060   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.443239   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:46.443474   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.443688   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.444130   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:46.444323   58365 main.go:141] libmachine: Using SSH client type: native
	I0828 18:05:46.444546   58365 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.140 22 <nil> <nil>}
	I0828 18:05:46.444564   58365 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:05:46.558715   58365 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724868346.519386438
	
	I0828 18:05:46.558742   58365 fix.go:216] guest clock: 1724868346.519386438
	I0828 18:05:46.558753   58365 fix.go:229] Guest: 2024-08-28 18:05:46.519386438 +0000 UTC Remote: 2024-08-28 18:05:46.439860099 +0000 UTC m=+46.976058983 (delta=79.526339ms)
	I0828 18:05:46.558798   58365 fix.go:200] guest clock delta is within tolerance: 79.526339ms
	I0828 18:05:46.558806   58365 start.go:83] releasing machines lock for "kubernetes-upgrade-502283", held for 22.59632352s
	I0828 18:05:46.558854   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:46.559122   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetIP
	I0828 18:05:46.561858   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.562265   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.562300   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.562494   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:46.562946   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:46.563139   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:05:46.563226   58365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:05:46.563266   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:46.563475   58365 ssh_runner.go:195] Run: cat /version.json
	I0828 18:05:46.563499   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:05:46.566181   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.566512   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.566629   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.566650   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.566981   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:46.567000   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:46.567012   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:46.567222   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:05:46.567235   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.567389   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:46.567407   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:05:46.567604   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:05:46.567625   58365 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:05:46.567740   58365 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:05:46.693097   58365 ssh_runner.go:195] Run: systemctl --version
	I0828 18:05:46.699562   58365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:05:46.869809   58365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:05:46.876507   58365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:05:46.876593   58365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:05:46.899360   58365 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:05:46.899395   58365 start.go:495] detecting cgroup driver to use...
	I0828 18:05:46.899471   58365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:05:46.917418   58365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:05:46.939079   58365 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:05:46.939142   58365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:05:46.959911   58365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:05:46.976612   58365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:05:47.123196   58365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:05:47.322218   58365 docker.go:233] disabling docker service ...
	I0828 18:05:47.322289   58365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:05:47.336835   58365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:05:47.350301   58365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:05:47.486400   58365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:05:47.608199   58365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
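Everything from "disabling cri-docker service" down to the is-active check above amounts to making sure CRI-O is the only runtime left to own the CRI socket. A condensed sketch of the same sequence (unit names from the log; the ignore-failure guards are added here because some units may not exist on a given image):

    for u in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$u" 2>/dev/null || true   # stop whatever happens to be running
    done
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service   # masked so nothing can pull them back in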
	I0828 18:05:47.623137   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:05:47.642702   58365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:05:47.642763   58365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:05:47.656829   58365 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:05:47.656892   58365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:05:47.667965   58365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:05:47.678519   58365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:05:47.688225   58365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
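By this point /etc/crictl.yaml points the CRI tooling at CRI-O's socket and the three sed edits above have pinned the pause image and the cgroup driver. A quick way to confirm the result on the guest (expected values copied from the log; the drop-in path is the one the sed commands edit):

    sudo crictl version   # should report RuntimeName: cri-o via /var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"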
	I0828 18:05:47.698891   58365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:05:47.709016   58365 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:05:47.709087   58365 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:05:47.729566   58365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
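The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once br_netfilter is loaded, which is why the very next commands load the module and enable IPv4 forwarding. The same prerequisite check by hand (a sketch; values from the log):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables        # now resolvable instead of "No such file or directory"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # bridge CNI needs forwarding enabled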
	I0828 18:05:47.743626   58365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:05:47.876778   58365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:05:48.008105   58365 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:05:48.008185   58365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:05:48.014442   58365 start.go:563] Will wait 60s for crictl version
	I0828 18:05:48.014504   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:48.018670   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:05:48.061386   58365 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:05:48.061508   58365 ssh_runner.go:195] Run: crio --version
	I0828 18:05:48.095499   58365 ssh_runner.go:195] Run: crio --version
	I0828 18:05:48.127729   58365 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:05:48.128861   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetIP
	I0828 18:05:48.131678   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:48.132027   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:05:48.132054   58365 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:05:48.132296   58365 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:05:48.136295   58365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:05:48.148881   58365 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-502283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-502283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:05:48.148991   58365 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:05:48.149041   58365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:05:48.184259   58365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:05:48.184334   58365 ssh_runner.go:195] Run: which lz4
	I0828 18:05:48.188309   58365 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:05:48.192533   58365 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:05:48.192570   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:05:49.745341   58365 crio.go:462] duration metric: took 1.557062241s to copy over tarball
	I0828 18:05:49.745433   58365 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:05:52.317524   58365 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.572056138s)
	I0828 18:05:52.317553   58365 crio.go:469] duration metric: took 2.572186501s to extract the tarball
	I0828 18:05:52.317582   58365 ssh_runner.go:146] rm: /preloaded.tar.lz4
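Because no preloaded images were found in CRI-O's store, the runner copies the ~473 MB preload tarball to the guest and unpacks it under /var before deleting it. The extraction step by hand would look like this (commands mirrored from the log; assumes the tarball has already been copied to /preloaded.tar.lz4 on the guest):

    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # re-list what the runtime can see, exactly as the runner does next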
	I0828 18:05:52.359913   58365 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:05:52.413598   58365 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:05:52.413627   58365 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:05:52.413705   58365 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:05:52.413740   58365 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.413767   58365 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:05:52.413805   58365 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:05:52.413816   58365 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.413837   58365 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:52.413744   58365 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:52.414292   58365 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.415586   58365 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:05:52.415701   58365 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:52.416309   58365 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.416422   58365 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:05:52.416590   58365 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.416728   58365 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.416779   58365 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:05:52.417331   58365 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:52.695941   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:05:52.711178   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.731128   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.731127   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:52.731143   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.735377   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:05:52.736371   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:52.737976   58365 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:05:52.738029   58365 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:05:52.738083   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.812575   58365 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:05:52.812618   58365 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.812667   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.835267   58365 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:05:52.835310   58365 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.835363   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.839203   58365 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:05:52.839234   58365 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.839281   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.880679   58365 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:05:52.880780   58365 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:05:52.880804   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.880822   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.880826   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.880829   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.880787   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:05:52.880736   58365 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:05:52.880863   58365 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:52.880882   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.880714   58365 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:05:52.880907   58365 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:52.880928   58365 ssh_runner.go:195] Run: which crictl
	I0828 18:05:52.963968   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:52.964074   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:52.964093   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:05:52.964163   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:05:52.964187   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:52.964232   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:52.964284   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:53.119717   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:05:53.119747   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:05:53.119754   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:53.119747   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:05:53.119806   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:05:53.119870   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:05:53.119886   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:53.289477   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:05:53.289525   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:05:53.289604   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:05:53.289605   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:05:53.289705   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:05:53.289715   58365 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:05:53.289783   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:05:53.335535   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:05:53.354584   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:05:53.354621   58365 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:05:53.629338   58365 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:05:53.776534   58365 cache_images.go:92] duration metric: took 1.362885723s to LoadCachedImages
	W0828 18:05:53.776659   58365 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0828 18:05:53.776679   58365 kubeadm.go:934] updating node { 192.168.50.140 8443 v1.20.0 crio true true} ...
	I0828 18:05:53.776807   58365 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-502283 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-502283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:05:53.776917   58365 ssh_runner.go:195] Run: crio config
	I0828 18:05:53.827247   58365 cni.go:84] Creating CNI manager for ""
	I0828 18:05:53.827274   58365 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:05:53.827285   58365 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:05:53.827309   58365 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.140 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-502283 NodeName:kubernetes-upgrade-502283 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:05:53.827497   58365 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-502283"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
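The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to /var/tmp/minikube/kubeadm.yaml before init. A generated config like this can be vetted without mutating the node by using kubeadm's dry-run mode (a sketch, assuming kubeadm init's --dry-run flag on the guest):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run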
	
	I0828 18:05:53.827567   58365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:05:53.840644   58365 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:05:53.840718   58365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:05:53.853360   58365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0828 18:05:53.872136   58365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
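The kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (433 bytes) written above carry the ExecStart shown earlier, with --container-runtime-endpoint pointing at the CRI-O socket. On the guest, the merged result can be inspected with (sketch):

    systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in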
	I0828 18:05:53.890967   58365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0828 18:05:53.910130   58365 ssh_runner.go:195] Run: grep 192.168.50.140	control-plane.minikube.internal$ /etc/hosts
	I0828 18:05:53.914665   58365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:05:53.927333   58365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:05:54.046057   58365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:05:54.062559   58365 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283 for IP: 192.168.50.140
	I0828 18:05:54.062581   58365 certs.go:194] generating shared ca certs ...
	I0828 18:05:54.062597   58365 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.062736   58365 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:05:54.062773   58365 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:05:54.062782   58365 certs.go:256] generating profile certs ...
	I0828 18:05:54.062828   58365 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.key
	I0828 18:05:54.062852   58365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.crt with IP's: []
	I0828 18:05:54.143485   58365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.crt ...
	I0828 18:05:54.143516   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.crt: {Name:mkab487d9f88d23c4c2cd41c8189cd7879c2cc6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.143733   58365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.key ...
	I0828 18:05:54.143756   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.key: {Name:mk7eed74266495606c98bc073f6f848c790cceec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.143884   58365 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key.2927eb79
	I0828 18:05:54.143909   58365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt.2927eb79 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.140]
	I0828 18:05:54.341850   58365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt.2927eb79 ...
	I0828 18:05:54.341880   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt.2927eb79: {Name:mkac2288414f0a2a32255f75113b92d42d8c06e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.342033   58365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key.2927eb79 ...
	I0828 18:05:54.342046   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key.2927eb79: {Name:mkaf401ea25e96220acf25746733dcbb65d898d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.342153   58365 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt.2927eb79 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt
	I0828 18:05:54.342248   58365 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key.2927eb79 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key
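The apiserver serving cert assembled just above is signed for the in-cluster service IP (10.96.0.1), loopback, and the node IP. Whether the SANs actually made it into the certificate can be checked with openssl (a sketch; the profile path is the one from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
    # expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.50.140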
	I0828 18:05:54.342308   58365 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.key
	I0828 18:05:54.342323   58365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.crt with IP's: []
	I0828 18:05:54.527393   58365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.crt ...
	I0828 18:05:54.527424   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.crt: {Name:mk077975960afa2210cf447a49b914504087d5f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.527572   58365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.key ...
	I0828 18:05:54.527586   58365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.key: {Name:mk6413980fa369c78dbdb6ba1fae32915439c628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:05:54.527740   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:05:54.527773   58365 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:05:54.527783   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:05:54.527803   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:05:54.527824   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:05:54.527846   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:05:54.527890   58365 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:05:54.528518   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:05:54.552859   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:05:54.575455   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:05:54.600895   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:05:54.626021   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0828 18:05:54.655612   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:05:54.679541   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:05:54.703751   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:05:54.729466   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:05:54.753044   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:05:54.776879   58365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:05:54.801584   58365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:05:54.817913   58365 ssh_runner.go:195] Run: openssl version
	I0828 18:05:54.823517   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:05:54.834039   58365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:05:54.838201   58365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:05:54.838253   58365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:05:54.843810   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:05:54.854592   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:05:54.865472   58365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:05:54.869868   58365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:05:54.869928   58365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:05:54.875437   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:05:54.889003   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:05:54.900433   58365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:05:54.905363   58365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:05:54.905425   58365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:05:54.911294   58365 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
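The test/ln -fs pairs above reproduce OpenSSL's hashed-symlink convention: trust lookups in /etc/ssl/certs go through <subject-hash>.0 links, and the hash is whatever openssl x509 -hash prints for the PEM. The same link could be built generically like this (sketch; file names from the log):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")    # e.g. b5213941, matching the link created above
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"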
	I0828 18:05:54.923524   58365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:05:54.932516   58365 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 18:05:54.932602   58365 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-502283 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-502283 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:05:54.932679   58365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:05:54.932753   58365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:05:54.988677   58365 cri.go:89] found id: ""
	I0828 18:05:54.988753   58365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:05:55.004544   58365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:05:55.015305   58365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:05:55.024342   58365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:05:55.024366   58365 kubeadm.go:157] found existing configuration files:
	
	I0828 18:05:55.024422   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:05:55.034379   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:05:55.034438   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:05:55.043518   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:05:55.052228   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:05:55.052295   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:05:55.061411   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:05:55.071268   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:05:55.071343   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:05:55.081883   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:05:55.090731   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:05:55.090797   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:05:55.101225   58365 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:05:55.234998   58365 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:05:55.235094   58365 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:05:55.371650   58365 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:05:55.371814   58365 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:05:55.371987   58365 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:05:55.553618   58365 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:05:55.693619   58365 out.go:235]   - Generating certificates and keys ...
	I0828 18:05:55.693775   58365 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:05:55.693907   58365 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:05:55.694026   58365 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 18:05:55.976243   58365 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 18:05:56.169784   58365 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 18:05:56.334510   58365 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 18:05:56.421798   58365 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 18:05:56.422057   58365 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	I0828 18:05:56.529924   58365 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 18:05:56.538178   58365 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	I0828 18:05:56.647611   58365 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 18:05:56.727801   58365 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 18:05:57.281549   58365 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 18:05:57.281704   58365 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:05:57.391331   58365 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:05:57.823279   58365 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:05:58.265257   58365 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:05:58.563325   58365 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:05:58.585870   58365 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:05:58.586026   58365 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:05:58.586135   58365 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:05:58.747594   58365 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:05:58.750205   58365 out.go:235]   - Booting up control plane ...
	I0828 18:05:58.750339   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:05:58.759466   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:05:58.760747   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:05:58.761835   58365 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:05:58.777822   58365 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:06:38.746881   58365 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:06:38.747146   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:06:38.747343   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:06:43.746933   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:06:43.747137   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:06:53.745989   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:06:53.746261   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:07:13.746819   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:07:13.747070   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:07:53.746452   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:07:53.746942   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:07:53.746955   58365 kubeadm.go:310] 
	I0828 18:07:53.747055   58365 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:07:53.747141   58365 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:07:53.747160   58365 kubeadm.go:310] 
	I0828 18:07:53.747267   58365 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:07:53.747357   58365 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:07:53.747632   58365 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:07:53.747659   58365 kubeadm.go:310] 
	I0828 18:07:53.747905   58365 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:07:53.748021   58365 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:07:53.748101   58365 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:07:53.748110   58365 kubeadm.go:310] 
	I0828 18:07:53.748316   58365 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:07:53.748466   58365 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:07:53.748477   58365 kubeadm.go:310] 
	I0828 18:07:53.748672   58365 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:07:53.748877   58365 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:07:53.749068   58365 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:07:53.749336   58365 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:07:53.749357   58365 kubeadm.go:310] 
	I0828 18:07:53.749582   58365 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:07:53.749721   58365 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:07:53.749814   58365 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0828 18:07:53.750266   58365 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-502283 localhost] and IPs [192.168.50.140 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:07:53.750318   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:07:54.224665   58365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:07:54.242250   58365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:07:54.252119   58365 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:07:54.252141   58365 kubeadm.go:157] found existing configuration files:
	
	I0828 18:07:54.252197   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:07:54.265018   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:07:54.265089   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:07:54.276338   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:07:54.285413   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:07:54.285485   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:07:54.295022   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:07:54.303928   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:07:54.303986   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:07:54.313186   58365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:07:54.321955   58365 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:07:54.322013   58365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:07:54.331103   58365 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:07:54.410249   58365 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:07:54.414464   58365 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:07:54.571807   58365 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:07:54.571941   58365 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:07:54.572066   58365 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:07:54.781409   58365 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:07:54.783098   58365 out.go:235]   - Generating certificates and keys ...
	I0828 18:07:54.783195   58365 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:07:54.783284   58365 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:07:54.783417   58365 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:07:54.783516   58365 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:07:54.783623   58365 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:07:54.783711   58365 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:07:54.783805   58365 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:07:54.783981   58365 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:07:54.784362   58365 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:07:54.784588   58365 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:07:54.784659   58365 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:07:54.784722   58365 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:07:54.914140   58365 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:07:55.234243   58365 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:07:55.320149   58365 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:07:55.408467   58365 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:07:55.439081   58365 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:07:55.440609   58365 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:07:55.440689   58365 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:07:55.605939   58365 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:07:55.607231   58365 out.go:235]   - Booting up control plane ...
	I0828 18:07:55.607347   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:07:55.615932   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:07:55.616042   58365 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:07:55.619058   58365 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:07:55.621045   58365 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:08:35.623929   58365 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:08:35.624026   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:08:35.624252   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:08:40.624588   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:08:40.624867   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:08:50.625874   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:08:50.626140   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:09:10.627522   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:09:10.627768   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:09:50.627126   58365 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:09:50.627412   58365 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:09:50.627434   58365 kubeadm.go:310] 
	I0828 18:09:50.627484   58365 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:09:50.627542   58365 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:09:50.627554   58365 kubeadm.go:310] 
	I0828 18:09:50.627609   58365 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:09:50.627658   58365 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:09:50.627815   58365 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:09:50.627824   58365 kubeadm.go:310] 
	I0828 18:09:50.628002   58365 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:09:50.628059   58365 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:09:50.628104   58365 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:09:50.628111   58365 kubeadm.go:310] 
	I0828 18:09:50.628259   58365 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:09:50.628412   58365 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:09:50.628444   58365 kubeadm.go:310] 
	I0828 18:09:50.628610   58365 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:09:50.628728   58365 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:09:50.628830   58365 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:09:50.628924   58365 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:09:50.628935   58365 kubeadm.go:310] 
	I0828 18:09:50.629874   58365 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:09:50.629999   58365 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:09:50.630107   58365 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:09:50.630203   58365 kubeadm.go:394] duration metric: took 3m55.697607556s to StartCluster
	I0828 18:09:50.630267   58365 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:09:50.630329   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:09:50.687131   58365 cri.go:89] found id: ""
	I0828 18:09:50.687166   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.687177   58365 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:09:50.687184   58365 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:09:50.687267   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:09:50.727196   58365 cri.go:89] found id: ""
	I0828 18:09:50.727230   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.727238   58365 logs.go:278] No container was found matching "etcd"
	I0828 18:09:50.727252   58365 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:09:50.727304   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:09:50.765039   58365 cri.go:89] found id: ""
	I0828 18:09:50.765070   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.765081   58365 logs.go:278] No container was found matching "coredns"
	I0828 18:09:50.765089   58365 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:09:50.765149   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:09:50.806798   58365 cri.go:89] found id: ""
	I0828 18:09:50.806824   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.806833   58365 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:09:50.806841   58365 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:09:50.806895   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:09:50.860450   58365 cri.go:89] found id: ""
	I0828 18:09:50.860481   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.860491   58365 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:09:50.860498   58365 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:09:50.860553   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:09:50.903878   58365 cri.go:89] found id: ""
	I0828 18:09:50.903902   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.903909   58365 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:09:50.903915   58365 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:09:50.903971   58365 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:09:50.952483   58365 cri.go:89] found id: ""
	I0828 18:09:50.952503   58365 logs.go:276] 0 containers: []
	W0828 18:09:50.952512   58365 logs.go:278] No container was found matching "kindnet"
	I0828 18:09:50.952520   58365 logs.go:123] Gathering logs for kubelet ...
	I0828 18:09:50.952533   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:09:51.003907   58365 logs.go:123] Gathering logs for dmesg ...
	I0828 18:09:51.003943   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:09:51.018531   58365 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:09:51.018559   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:09:51.185396   58365 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:09:51.185422   58365 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:09:51.185439   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:09:51.320837   58365 logs.go:123] Gathering logs for container status ...
	I0828 18:09:51.320945   58365 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0828 18:09:51.376220   58365 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:09:51.376281   58365 out.go:270] * 
	* 
	W0828 18:09:51.376345   58365 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:09:51.376365   58365 out.go:270] * 
	* 
	W0828 18:09:51.377535   58365 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:09:51.381268   58365 out.go:201] 
	W0828 18:09:51.382382   58365 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:09:51.382438   58365 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:09:51.382472   58365 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:09:51.383869   58365 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
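The stderr above ends with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A manual retry along those lines (a sketch only, assuming the same profile, driver and runtime as the failed run) would be:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	# if the kubelet still never answers on 127.0.0.1:10248, inspect it on the node as kubeadm advises:
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-502283 -- sudo journalctl -xeu kubelet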
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-502283
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-502283: (2.340171485s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-502283 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-502283 status --format={{.Host}}: exit status 7 (84.861751ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.98568783s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-502283 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.529943ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-502283] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-502283
	    minikube start -p kubernetes-upgrade-502283 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5022832 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-502283 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
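The downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED, as the test expects, so the existing cluster should still be serving v1.31.0; one way to confirm that (the same check the test ran after the upgrade) is:

	kubectl --context kubernetes-upgrade-502283 version --output=json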
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-502283 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.893965818s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-28 18:11:37.909635012 +0000 UTC m=+4816.960196994
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-502283 -n kubernetes-upgrade-502283
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-502283 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-502283 logs -n 25: (1.575253103s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo cat                    | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo cat                    | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo cat                    | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-647068 sudo                        | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-647068                             | custom-flannel-647068 | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC | 28 Aug 24 18:11 UTC |
	| start   | -p bridge-647068 --memory=3072                       | bridge-647068         | jenkins | v1.33.1 | 28 Aug 24 18:11 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:11:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:11:33.670384   68931 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:11:33.670519   68931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:11:33.670528   68931 out.go:358] Setting ErrFile to fd 2...
	I0828 18:11:33.670533   68931 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:11:33.670729   68931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:11:33.671317   68931 out.go:352] Setting JSON to false
	I0828 18:11:33.672443   68931 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6840,"bootTime":1724861854,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:11:33.672504   68931 start.go:139] virtualization: kvm guest
	I0828 18:11:33.674119   68931 out.go:177] * [bridge-647068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:11:33.675789   68931 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:11:33.675850   68931 notify.go:220] Checking for updates...
	I0828 18:11:33.678211   68931 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:11:33.679494   68931 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:11:33.680840   68931 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:11:33.682491   68931 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:11:33.683754   68931 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:11:33.685625   68931 config.go:182] Loaded profile config "enable-default-cni-647068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:33.685765   68931 config.go:182] Loaded profile config "flannel-647068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:33.685878   68931 config.go:182] Loaded profile config "kubernetes-upgrade-502283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:33.685987   68931 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:11:33.727880   68931 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 18:11:33.729245   68931 start.go:297] selected driver: kvm2
	I0828 18:11:33.729266   68931 start.go:901] validating driver "kvm2" against <nil>
	I0828 18:11:33.729281   68931 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:11:33.730247   68931 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:11:33.730340   68931 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:11:33.747262   68931 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:11:33.747311   68931 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 18:11:33.747595   68931 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:11:33.747638   68931 cni.go:84] Creating CNI manager for "bridge"
	I0828 18:11:33.747650   68931 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 18:11:33.747728   68931 start.go:340] cluster config:
	{Name:bridge-647068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-647068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:11:33.747863   68931 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:11:33.749457   68931 out.go:177] * Starting "bridge-647068" primary control-plane node in "bridge-647068" cluster
	I0828 18:11:30.904521   67410 main.go:141] libmachine: (flannel-647068) DBG | domain flannel-647068 has defined MAC address 52:54:00:f0:25:16 in network mk-flannel-647068
	I0828 18:11:30.905083   67410 main.go:141] libmachine: (flannel-647068) DBG | unable to find current IP address of domain flannel-647068 in network mk-flannel-647068
	I0828 18:11:30.905111   67410 main.go:141] libmachine: (flannel-647068) DBG | I0828 18:11:30.905045   67439 retry.go:31] will retry after 1.80142682s: waiting for machine to come up
	I0828 18:11:32.708550   67410 main.go:141] libmachine: (flannel-647068) DBG | domain flannel-647068 has defined MAC address 52:54:00:f0:25:16 in network mk-flannel-647068
	I0828 18:11:32.709428   67410 main.go:141] libmachine: (flannel-647068) DBG | unable to find current IP address of domain flannel-647068 in network mk-flannel-647068
	I0828 18:11:32.709455   67410 main.go:141] libmachine: (flannel-647068) DBG | I0828 18:11:32.709370   67439 retry.go:31] will retry after 2.481841884s: waiting for machine to come up
	I0828 18:11:31.386282   65142 pod_ready.go:103] pod "coredns-6f6b679f8f-b65m5" in "kube-system" namespace has status "Ready":"False"
	I0828 18:11:33.868807   65142 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-b65m5" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-b65m5" not found
	I0828 18:11:33.868835   65142 pod_ready.go:82] duration metric: took 11.503044328s for pod "coredns-6f6b679f8f-b65m5" in "kube-system" namespace to be "Ready" ...
	E0828 18:11:33.868848   65142 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-b65m5" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-b65m5" not found
	I0828 18:11:33.868857   65142 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-s6d6v" in "kube-system" namespace to be "Ready" ...
	I0828 18:11:31.245841   65783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:11:31.343948   65783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:11:31.468690   65783 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:11:31.468781   65783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:11:31.969494   65783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:11:32.469254   65783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:11:32.484564   65783 api_server.go:72] duration metric: took 1.015887056s to wait for apiserver process to appear ...
	I0828 18:11:32.484594   65783 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:11:32.484616   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:34.959261   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:11:34.959293   65783 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:11:34.959305   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:34.990867   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:11:34.990905   65783 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:11:34.990921   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:35.017105   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:11:35.017142   65783 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:11:35.484660   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:35.491307   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:11:35.491336   65783 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:11:35.984753   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:35.999455   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:11:35.999486   65783 api_server.go:103] status: https://192.168.50.140:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:11:36.485184   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:36.489223   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0828 18:11:36.495135   65783 api_server.go:141] control plane version: v1.31.0
	I0828 18:11:36.495156   65783 api_server.go:131] duration metric: took 4.01055538s to wait for apiserver health ...
	I0828 18:11:36.495164   65783 cni.go:84] Creating CNI manager for ""
	I0828 18:11:36.495170   65783 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:11:36.497047   65783 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:11:36.498434   65783 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:11:36.509028   65783 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:11:36.525103   65783 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:11:36.525174   65783 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0828 18:11:36.525193   65783 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0828 18:11:36.535777   65783 system_pods.go:59] 8 kube-system pods found
	I0828 18:11:36.535816   65783 system_pods.go:61] "coredns-6f6b679f8f-5pwgc" [02fda28a-8802-4472-b46b-f6d10e680c6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:11:36.535842   65783 system_pods.go:61] "coredns-6f6b679f8f-glsln" [4614e572-0a8f-409e-b964-cee6cb7e328f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:11:36.535853   65783 system_pods.go:61] "etcd-kubernetes-upgrade-502283" [bdbb413c-b06f-46b1-8281-35056772ab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:11:36.535866   65783 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-502283" [7f1427d4-fd79-4d16-af07-fd2e734cc364] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:11:36.535878   65783 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-502283" [f7497e86-790c-49c0-884b-ba52f6138b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:11:36.535892   65783 system_pods.go:61] "kube-proxy-v4cz8" [414642eb-551c-40f0-bef1-ef01a1bd6f33] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:11:36.535902   65783 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-502283" [bdc692fa-bb8a-424f-8984-6d4f55b3f60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:11:36.535920   65783 system_pods.go:61] "storage-provisioner" [8d0e3f80-e073-40ab-a28a-6250bcf1b817] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:11:36.535932   65783 system_pods.go:74] duration metric: took 10.807682ms to wait for pod list to return data ...
	I0828 18:11:36.535945   65783 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:11:36.539263   65783 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:11:36.539291   65783 node_conditions.go:123] node cpu capacity is 2
	I0828 18:11:36.539304   65783 node_conditions.go:105] duration metric: took 3.350876ms to run NodePressure ...
	I0828 18:11:36.539323   65783 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:11:36.847997   65783 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:11:36.859539   65783 ops.go:34] apiserver oom_adj: -16
	I0828 18:11:36.859560   65783 kubeadm.go:597] duration metric: took 22.169816113s to restartPrimaryControlPlane
	I0828 18:11:36.859570   65783 kubeadm.go:394] duration metric: took 22.285603177s to StartCluster
	I0828 18:11:36.859586   65783 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:11:36.859671   65783 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:11:36.860619   65783 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:11:36.860853   65783 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.140 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:11:36.860921   65783 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:11:36.860995   65783 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-502283"
	I0828 18:11:36.861023   65783 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-502283"
	W0828 18:11:36.861034   65783 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:11:36.861013   65783 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-502283"
	I0828 18:11:36.861052   65783 config.go:182] Loaded profile config "kubernetes-upgrade-502283": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:36.861062   65783 host.go:66] Checking if "kubernetes-upgrade-502283" exists ...
	I0828 18:11:36.861086   65783 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-502283"
	I0828 18:11:36.861326   65783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:11:36.861353   65783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:11:36.861609   65783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:11:36.861657   65783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:11:36.862576   65783 out.go:177] * Verifying Kubernetes components...
	I0828 18:11:36.864044   65783 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:11:36.878210   65783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40853
	I0828 18:11:36.878778   65783 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:11:36.879388   65783 main.go:141] libmachine: Using API Version  1
	I0828 18:11:36.879421   65783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:11:36.879811   65783 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:11:36.880046   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetState
	I0828 18:11:36.881555   65783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
	I0828 18:11:36.882012   65783 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:11:36.882481   65783 main.go:141] libmachine: Using API Version  1
	I0828 18:11:36.882506   65783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:11:36.882783   65783 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:11:36.883091   65783 kapi.go:59] client config for kubernetes-upgrade-502283: &rest.Config{Host:"https://192.168.50.140:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.crt", KeyFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kubernetes-upgrade-502283/client.key", CAFile:"/home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f19080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0828 18:11:36.883318   65783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:11:36.883360   65783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:11:36.883422   65783 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-502283"
	W0828 18:11:36.883448   65783 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:11:36.883476   65783 host.go:66] Checking if "kubernetes-upgrade-502283" exists ...
	I0828 18:11:36.883812   65783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:11:36.883850   65783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:11:36.898749   65783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0828 18:11:36.899211   65783 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:11:36.899718   65783 main.go:141] libmachine: Using API Version  1
	I0828 18:11:36.899747   65783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:11:36.900164   65783 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:11:36.900355   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetState
	I0828 18:11:36.902069   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:11:36.902906   65783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0828 18:11:36.903293   65783 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:11:36.903897   65783 main.go:141] libmachine: Using API Version  1
	I0828 18:11:36.903922   65783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:11:36.904024   65783 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:11:36.904359   65783 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:11:36.904846   65783 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:11:36.904888   65783 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:11:36.905623   65783 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:11:36.905640   65783 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:11:36.905657   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:11:36.909236   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:11:36.909753   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:11:36.909800   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:11:36.909967   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:11:36.910240   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:11:36.910449   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:11:36.910626   65783 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:11:36.922274   65783 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34021
	I0828 18:11:36.922765   65783 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:11:36.923255   65783 main.go:141] libmachine: Using API Version  1
	I0828 18:11:36.923283   65783 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:11:36.923620   65783 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:11:36.923827   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetState
	I0828 18:11:36.925596   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .DriverName
	I0828 18:11:36.925793   65783 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:11:36.925810   65783 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:11:36.925827   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHHostname
	I0828 18:11:36.928932   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:11:36.929403   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:04:81", ip: ""} in network mk-kubernetes-upgrade-502283: {Iface:virbr4 ExpiryTime:2024-08-28 19:05:38 +0000 UTC Type:0 Mac:52:54:00:07:04:81 Iaid: IPaddr:192.168.50.140 Prefix:24 Hostname:kubernetes-upgrade-502283 Clientid:01:52:54:00:07:04:81}
	I0828 18:11:36.929430   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | domain kubernetes-upgrade-502283 has defined IP address 192.168.50.140 and MAC address 52:54:00:07:04:81 in network mk-kubernetes-upgrade-502283
	I0828 18:11:36.929567   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHPort
	I0828 18:11:36.929757   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHKeyPath
	I0828 18:11:36.929936   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .GetSSHUsername
	I0828 18:11:36.930110   65783 sshutil.go:53] new ssh client: &{IP:192.168.50.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/kubernetes-upgrade-502283/id_rsa Username:docker}
	I0828 18:11:37.048086   65783 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:11:37.074400   65783 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:11:37.074493   65783 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:11:37.093321   65783 api_server.go:72] duration metric: took 232.436836ms to wait for apiserver process to appear ...
	I0828 18:11:37.093352   65783 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:11:37.093374   65783 api_server.go:253] Checking apiserver healthz at https://192.168.50.140:8443/healthz ...
	I0828 18:11:37.100428   65783 api_server.go:279] https://192.168.50.140:8443/healthz returned 200:
	ok
	I0828 18:11:37.102062   65783 api_server.go:141] control plane version: v1.31.0
	I0828 18:11:37.102091   65783 api_server.go:131] duration metric: took 8.731177ms to wait for apiserver health ...
	I0828 18:11:37.102101   65783 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:11:37.123517   65783 system_pods.go:59] 8 kube-system pods found
	I0828 18:11:37.123545   65783 system_pods.go:61] "coredns-6f6b679f8f-5pwgc" [02fda28a-8802-4472-b46b-f6d10e680c6e] Running
	I0828 18:11:37.123553   65783 system_pods.go:61] "coredns-6f6b679f8f-glsln" [4614e572-0a8f-409e-b964-cee6cb7e328f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:11:37.123561   65783 system_pods.go:61] "etcd-kubernetes-upgrade-502283" [bdbb413c-b06f-46b1-8281-35056772ab93] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:11:37.123569   65783 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-502283" [7f1427d4-fd79-4d16-af07-fd2e734cc364] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:11:37.123576   65783 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-502283" [f7497e86-790c-49c0-884b-ba52f6138b75] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:11:37.123581   65783 system_pods.go:61] "kube-proxy-v4cz8" [414642eb-551c-40f0-bef1-ef01a1bd6f33] Running
	I0828 18:11:37.123587   65783 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-502283" [bdc692fa-bb8a-424f-8984-6d4f55b3f60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:11:37.123591   65783 system_pods.go:61] "storage-provisioner" [8d0e3f80-e073-40ab-a28a-6250bcf1b817] Running
	I0828 18:11:37.123599   65783 system_pods.go:74] duration metric: took 21.489585ms to wait for pod list to return data ...
	I0828 18:11:37.123610   65783 kubeadm.go:582] duration metric: took 262.730942ms to wait for: map[apiserver:true system_pods:true]
	I0828 18:11:37.123624   65783 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:11:37.126333   65783 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:11:37.126363   65783 node_conditions.go:123] node cpu capacity is 2
	I0828 18:11:37.126373   65783 node_conditions.go:105] duration metric: took 2.744655ms to run NodePressure ...
	I0828 18:11:37.126384   65783 start.go:241] waiting for startup goroutines ...
	I0828 18:11:37.132493   65783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:11:37.133652   65783 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:11:37.823112   65783 main.go:141] libmachine: Making call to close driver server
	I0828 18:11:37.823144   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Close
	I0828 18:11:37.823152   65783 main.go:141] libmachine: Making call to close driver server
	I0828 18:11:37.823166   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Close
	I0828 18:11:37.823441   65783 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:11:37.823461   65783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:11:37.823473   65783 main.go:141] libmachine: Making call to close driver server
	I0828 18:11:37.823481   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Close
	I0828 18:11:37.823501   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Closing plugin on server side
	I0828 18:11:37.823514   65783 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:11:37.823527   65783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:11:37.823543   65783 main.go:141] libmachine: Making call to close driver server
	I0828 18:11:37.823551   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Close
	I0828 18:11:37.823670   65783 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:11:37.823679   65783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:11:37.823751   65783 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:11:37.823765   65783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:11:37.830226   65783 main.go:141] libmachine: Making call to close driver server
	I0828 18:11:37.830242   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) Calling .Close
	I0828 18:11:37.830493   65783 main.go:141] libmachine: (kubernetes-upgrade-502283) DBG | Closing plugin on server side
	I0828 18:11:37.830517   65783 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:11:37.830551   65783 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:11:37.832735   65783 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0828 18:11:37.834059   65783 addons.go:510] duration metric: took 973.147968ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0828 18:11:37.834109   65783 start.go:246] waiting for cluster config update ...
	I0828 18:11:37.834120   65783 start.go:255] writing updated cluster config ...
	I0828 18:11:37.834343   65783 ssh_runner.go:195] Run: rm -f paused
	I0828 18:11:37.891973   65783 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:11:37.893960   65783 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-502283" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.635098871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc4d33d6-47f6-41fd-bebb-e8619c4cfa11 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.637213935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb9cb9a8-b8d0-487f-bdbc-3bc1067b890f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.637587499Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868698637561086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb9cb9a8-b8d0-487f-bdbc-3bc1067b890f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.638295597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e08e7d4a-8d2f-42ec-af95-7329c577bd30 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.638360578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e08e7d4a-8d2f-42ec-af95-7329c577bd30 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.638868309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5baf19572c649666b604bec0be86404c7d35eb94a3f965503e89afd015383263,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695738795186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f67314f6c0a6cd58a06125a0f0a0a003d306d60fd13429c2e71a5b24263ecf,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724868695725234057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9efe7eae54089ad44394ff64e5c1e5076632f8683c017c078c95af4dc6a3d14d,PodSandboxId:54c039216ff1bdfdb90c2ccc8b860ec5dcf304bc8aa78733122cb261fdfd26e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724868695724262319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67456e52920e89865d4f031e316a23f93f5a12bcd3face32302bec5665a41678,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695715279418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d
10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb5589dc2c295d0b1f1cae587e743f3a08cad66b327cb379b7625afcdd9fa485,PodSandboxId:e96f46879267fda5c59717951de665c4e4f3349c2c07fa1a6689aa580f69b0d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172486869187
8145540,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f77e5499ac5f8963948196fd07fa1449275bd58c47b254dab572cfd5ae01d8,PodSandboxId:563b952d6c500b8243f8a5e7ca2008a44e0a7f93c7fcabf39c3c3cf7ac015cb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1724868691900728655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb162ff6424aa1cb6cc4c5855aebfce53c86732dd7cb0566ab0b76fec5f6c2a,PodSandboxId:a4635991dad1dba88addb07f67ea81cdd74d314f883c8aeb2492365e91539900,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724
868691874265414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15e75f361238c3a98cb702d74ac58f499a3c57196c6bd1ba8d137b8d960da67,PodSandboxId:86f648ebc9c6746b429b655a3bb98e050d36ad4c79425a09e5a3ec310a2a17ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724868691866452217
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724868687713560175,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674182336917,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674165542904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375,PodSandboxId:4ebe19e40d51e7c3c764c4ea2c7a15bc7269115e3fccd93ba3b8117f4b31
ec59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724868670886681544,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6,PodSandboxId:241e8368560fa985dda8656a8831c8cd7f773cc3726783df37ed94935f28b00f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724868670820352578,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3,PodSandboxId:27c478ec91d436fd5a7947b5dbb43bf14ab13ff989a4fa5e15505b58f6cdc75f,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724868670748794656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4,PodSandboxId:5fd02bfce8c55b5e6a4b5b5b36145eb8b548da28bff8f12a43358db0921cebe0,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724868670796872099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae,PodSandboxId:c4d416a4d721e0df68b457511adc7609bc522554965397438c9c5740c518d6e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724868670502282551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e08e7d4a-8d2f-42ec-af95-7329c577bd30 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.649102437Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9b2723fd-50e9-429e-bc1f-6e7e129f9166 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.649889605Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-glsln,Uid:4614e572-0a8f-409e-b964-cee6cb7e328f,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673745337090,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T18:10:55.328038371Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-5pwgc,Uid:02fda28a-8802-4472-b46b-f6d10e680c6e,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673683084751,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d10e680c6e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T18:10:55.312334668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:563b952d6c500b8243f8a5e7ca2008a44e0a7f93c7fcabf39c3c3cf7ac015cb8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-502283,Uid:96076efd91712895c130cee6f11cf9f5,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673416325590,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,tier: control-plane,},Ann
otations:map[string]string{kubernetes.io/config.hash: 96076efd91712895c130cee6f11cf9f5,kubernetes.io/config.seen: 2024-08-28T18:10:43.651892434Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8d0e3f80-e073-40ab-a28a-6250bcf1b817,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673396892723,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage
-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-28T18:10:54.356724558Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4635991dad1dba88addb07f67ea81cdd74d314f883c8aeb2492365e91539900,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-502283,Uid:6b7eac52f7f4eea6227ce001622f7616,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673388982618,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.140:8443,kubernetes.io/config.hash: 6b7eac52f7f4eea6227ce001622f7616,kubernetes.io/config.seen: 2024-08-28T18:10:43.651890065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:54c039216ff1bdfdb90c2ccc8b860ec5dcf304bc8aa78733122cb261fdfd26e5,Metadata:&PodSandboxMetadata{Name:kube-proxy-v4cz8,Uid:414642eb-551c-40f0-bef1-ef01a1bd6f33,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673325211436,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T18:10:55.287523680Z,kubernetes.
io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86f648ebc9c6746b429b655a3bb98e050d36ad4c79425a09e5a3ec310a2a17ae,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-502283,Uid:b5463fd9ce22905bca19ea83bc2c9d4d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673303312403,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.140:2379,kubernetes.io/config.hash: b5463fd9ce22905bca19ea83bc2c9d4d,kubernetes.io/config.seen: 2024-08-28T18:10:43.651886129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e96f46879267fda5c59717951de665c4e4f3349c2c07fa1a6689aa580f69b0d2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-502283,Uid:9f109f9805cdd51e2f6
dab6c14ef31ff,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1724868673156366781,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9f109f9805cdd51e2f6dab6c14ef31ff,kubernetes.io/config.seen: 2024-08-28T18:10:43.651891352Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:241e8368560fa985dda8656a8831c8cd7f773cc3726783df37ed94935f28b00f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-502283,Uid:96076efd91712895c130cee6f11cf9f5,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724868669963268455,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 96076efd91712895c130cee6f11cf9f5,kubernetes.io/config.seen: 2024-08-28T18:10:43.651892434Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5fd02bfce8c55b5e6a4b5b5b36145eb8b548da28bff8f12a43358db0921cebe0,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-502283,Uid:b5463fd9ce22905bca19ea83bc2c9d4d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724868669945342824,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.140:2379,kubernetes.io/config.hash: b5463fd9ce22905bca19ea83bc2c9d4d,kubernetes.io/config.seen
: 2024-08-28T18:10:43.651886129Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:27c478ec91d436fd5a7947b5dbb43bf14ab13ff989a4fa5e15505b58f6cdc75f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-502283,Uid:9f109f9805cdd51e2f6dab6c14ef31ff,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724868669934710045,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9f109f9805cdd51e2f6dab6c14ef31ff,kubernetes.io/config.seen: 2024-08-28T18:10:43.651891352Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ebe19e40d51e7c3c764c4ea2c7a15bc7269115e3fccd93ba3b8117f4b31ec59,Metadata:&PodSandboxMetadata{Name:kube-proxy-v4cz8,Uid:414642eb-551c-40f0-bef1-e
f01a1bd6f33,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724868669925745029,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-28T18:10:55.287523680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c4d416a4d721e0df68b457511adc7609bc522554965397438c9c5740c518d6e5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-502283,Uid:6b7eac52f7f4eea6227ce001622f7616,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1724868669897410941,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7e
ac52f7f4eea6227ce001622f7616,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.140:8443,kubernetes.io/config.hash: 6b7eac52f7f4eea6227ce001622f7616,kubernetes.io/config.seen: 2024-08-28T18:10:43.651890065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9b2723fd-50e9-429e-bc1f-6e7e129f9166 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.651204159Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e411796e-092d-4e7e-bbdf-b420026a2352 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.651274257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e411796e-092d-4e7e-bbdf-b420026a2352 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.651717308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5baf19572c649666b604bec0be86404c7d35eb94a3f965503e89afd015383263,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695738795186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f67314f6c0a6cd58a06125a0f0a0a003d306d60fd13429c2e71a5b24263ecf,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724868695725234057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9efe7eae54089ad44394ff64e5c1e5076632f8683c017c078c95af4dc6a3d14d,PodSandboxId:54c039216ff1bdfdb90c2ccc8b860ec5dcf304bc8aa78733122cb261fdfd26e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724868695724262319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67456e52920e89865d4f031e316a23f93f5a12bcd3face32302bec5665a41678,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695715279418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d
10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb5589dc2c295d0b1f1cae587e743f3a08cad66b327cb379b7625afcdd9fa485,PodSandboxId:e96f46879267fda5c59717951de665c4e4f3349c2c07fa1a6689aa580f69b0d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172486869187
8145540,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f77e5499ac5f8963948196fd07fa1449275bd58c47b254dab572cfd5ae01d8,PodSandboxId:563b952d6c500b8243f8a5e7ca2008a44e0a7f93c7fcabf39c3c3cf7ac015cb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1724868691900728655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb162ff6424aa1cb6cc4c5855aebfce53c86732dd7cb0566ab0b76fec5f6c2a,PodSandboxId:a4635991dad1dba88addb07f67ea81cdd74d314f883c8aeb2492365e91539900,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724
868691874265414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15e75f361238c3a98cb702d74ac58f499a3c57196c6bd1ba8d137b8d960da67,PodSandboxId:86f648ebc9c6746b429b655a3bb98e050d36ad4c79425a09e5a3ec310a2a17ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724868691866452217
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724868687713560175,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674182336917,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674165542904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375,PodSandboxId:4ebe19e40d51e7c3c764c4ea2c7a15bc7269115e3fccd93ba3b8117f4b31
ec59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724868670886681544,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6,PodSandboxId:241e8368560fa985dda8656a8831c8cd7f773cc3726783df37ed94935f28b00f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724868670820352578,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3,PodSandboxId:27c478ec91d436fd5a7947b5dbb43bf14ab13ff989a4fa5e15505b58f6cdc75f,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724868670748794656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4,PodSandboxId:5fd02bfce8c55b5e6a4b5b5b36145eb8b548da28bff8f12a43358db0921cebe0,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724868670796872099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae,PodSandboxId:c4d416a4d721e0df68b457511adc7609bc522554965397438c9c5740c518d6e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724868670502282551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e411796e-092d-4e7e-bbdf-b420026a2352 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.692995512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f049e98-d574-49ad-a909-418419467d37 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.693076183Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f049e98-d574-49ad-a909-418419467d37 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.697579995Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bae4ba76-96a5-4b73-baf3-debbaa545871 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.698022504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868698697996148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bae4ba76-96a5-4b73-baf3-debbaa545871 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.700635556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01885570-107c-4948-81a4-ebb635e404f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.700735434Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01885570-107c-4948-81a4-ebb635e404f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.701256150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5baf19572c649666b604bec0be86404c7d35eb94a3f965503e89afd015383263,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695738795186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f67314f6c0a6cd58a06125a0f0a0a003d306d60fd13429c2e71a5b24263ecf,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724868695725234057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9efe7eae54089ad44394ff64e5c1e5076632f8683c017c078c95af4dc6a3d14d,PodSandboxId:54c039216ff1bdfdb90c2ccc8b860ec5dcf304bc8aa78733122cb261fdfd26e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724868695724262319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67456e52920e89865d4f031e316a23f93f5a12bcd3face32302bec5665a41678,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695715279418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d
10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb5589dc2c295d0b1f1cae587e743f3a08cad66b327cb379b7625afcdd9fa485,PodSandboxId:e96f46879267fda5c59717951de665c4e4f3349c2c07fa1a6689aa580f69b0d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172486869187
8145540,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f77e5499ac5f8963948196fd07fa1449275bd58c47b254dab572cfd5ae01d8,PodSandboxId:563b952d6c500b8243f8a5e7ca2008a44e0a7f93c7fcabf39c3c3cf7ac015cb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1724868691900728655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb162ff6424aa1cb6cc4c5855aebfce53c86732dd7cb0566ab0b76fec5f6c2a,PodSandboxId:a4635991dad1dba88addb07f67ea81cdd74d314f883c8aeb2492365e91539900,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724
868691874265414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15e75f361238c3a98cb702d74ac58f499a3c57196c6bd1ba8d137b8d960da67,PodSandboxId:86f648ebc9c6746b429b655a3bb98e050d36ad4c79425a09e5a3ec310a2a17ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724868691866452217
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724868687713560175,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674182336917,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674165542904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375,PodSandboxId:4ebe19e40d51e7c3c764c4ea2c7a15bc7269115e3fccd93ba3b8117f4b31
ec59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724868670886681544,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6,PodSandboxId:241e8368560fa985dda8656a8831c8cd7f773cc3726783df37ed94935f28b00f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724868670820352578,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3,PodSandboxId:27c478ec91d436fd5a7947b5dbb43bf14ab13ff989a4fa5e15505b58f6cdc75f,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724868670748794656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4,PodSandboxId:5fd02bfce8c55b5e6a4b5b5b36145eb8b548da28bff8f12a43358db0921cebe0,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724868670796872099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae,PodSandboxId:c4d416a4d721e0df68b457511adc7609bc522554965397438c9c5740c518d6e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724868670502282551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01885570-107c-4948-81a4-ebb635e404f2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.750991293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b1809e6-a67c-4ba8-b0d5-2610cf90170f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.751086467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b1809e6-a67c-4ba8-b0d5-2610cf90170f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.752540283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5ff1635-1c7b-4631-b0ef-9611a1f4c784 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.752998939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724868698752970525,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5ff1635-1c7b-4631-b0ef-9611a1f4c784 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.753560496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0508ee1f-517a-4424-8622-40e1d7ecf1a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.753634801Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0508ee1f-517a-4424-8622-40e1d7ecf1a1 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:11:38 kubernetes-upgrade-502283 crio[2986]: time="2024-08-28 18:11:38.754056263Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5baf19572c649666b604bec0be86404c7d35eb94a3f965503e89afd015383263,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695738795186,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33f67314f6c0a6cd58a06125a0f0a0a003d306d60fd13429c2e71a5b24263ecf,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724868695725234057,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9efe7eae54089ad44394ff64e5c1e5076632f8683c017c078c95af4dc6a3d14d,PodSandboxId:54c039216ff1bdfdb90c2ccc8b860ec5dcf304bc8aa78733122cb261fdfd26e5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724868695724262319,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67456e52920e89865d4f031e316a23f93f5a12bcd3face32302bec5665a41678,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724868695715279418,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d
10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb5589dc2c295d0b1f1cae587e743f3a08cad66b327cb379b7625afcdd9fa485,PodSandboxId:e96f46879267fda5c59717951de665c4e4f3349c2c07fa1a6689aa580f69b0d2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:172486869187
8145540,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f77e5499ac5f8963948196fd07fa1449275bd58c47b254dab572cfd5ae01d8,PodSandboxId:563b952d6c500b8243f8a5e7ca2008a44e0a7f93c7fcabf39c3c3cf7ac015cb8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt
:1724868691900728655,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb162ff6424aa1cb6cc4c5855aebfce53c86732dd7cb0566ab0b76fec5f6c2a,PodSandboxId:a4635991dad1dba88addb07f67ea81cdd74d314f883c8aeb2492365e91539900,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724
868691874265414,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e15e75f361238c3a98cb702d74ac58f499a3c57196c6bd1ba8d137b8d960da67,PodSandboxId:86f648ebc9c6746b429b655a3bb98e050d36ad4c79425a09e5a3ec310a2a17ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724868691866452217
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca,PodSandboxId:654b750f207a79a2b6e3db8fec4305e25efec65c984ff1bdc6f54adb5807a859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724868687713560175,Labels:map[string]st
ring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0e3f80-e073-40ab-a28a-6250bcf1b817,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b,PodSandboxId:39fa4be07dec384840fd5c205737b8295bfac59cd91eadb6b82755bf11bede1c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674182336917,Labels:map[string]string{io.kubernetes.contai
ner.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-5pwgc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 02fda28a-8802-4472-b46b-f6d10e680c6e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3,PodSandboxId:ebb2197aa54cc059a6a81b0c3580010c108a4b31ddbfa8a39db0370054101ce8,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedIma
ge:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1724868674165542904,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-glsln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4614e572-0a8f-409e-b964-cee6cb7e328f,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375,PodSandboxId:4ebe19e40d51e7c3c764c4ea2c7a15bc7269115e3fccd93ba3b8117f4b31
ec59,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1724868670886681544,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-v4cz8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 414642eb-551c-40f0-bef1-ef01a1bd6f33,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6,PodSandboxId:241e8368560fa985dda8656a8831c8cd7f773cc3726783df37ed94935f28b00f,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1724868670820352578,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96076efd91712895c130cee6f11cf9f5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3,PodSandboxId:27c478ec91d436fd5a7947b5dbb43bf14ab13ff989a4fa5e15505b58f6cdc75f,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1724868670748794656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f109f9805cdd51e2f6dab6c14ef31ff,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4,PodSandboxId:5fd02bfce8c55b5e6a4b5b5b36145eb8b548da28bff8f12a43358db0921cebe0,Metadata:&Conta
inerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1724868670796872099,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5463fd9ce22905bca19ea83bc2c9d4d,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae,PodSandboxId:c4d416a4d721e0df68b457511adc7609bc522554965397438c9c5740c518d6e5,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724868670502282551,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-502283,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b7eac52f7f4eea6227ce001622f7616,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0508ee1f-517a-4424-8622-40e1d7ecf1a1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5baf19572c649       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   ebb2197aa54cc       coredns-6f6b679f8f-glsln
	33f67314f6c0a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   654b750f207a7       storage-provisioner
	9efe7eae54089       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago       Running             kube-proxy                2                   54c039216ff1b       kube-proxy-v4cz8
	67456e52920e8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   39fa4be07dec3       coredns-6f6b679f8f-5pwgc
	45f77e5499ac5       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   6 seconds ago       Running             kube-scheduler            2                   563b952d6c500       kube-scheduler-kubernetes-upgrade-502283
	eb5589dc2c295       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   6 seconds ago       Running             kube-controller-manager   2                   e96f46879267f       kube-controller-manager-kubernetes-upgrade-502283
	ceb162ff6424a       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   6 seconds ago       Running             kube-apiserver            2                   a4635991dad1d       kube-apiserver-kubernetes-upgrade-502283
	e15e75f361238       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   6 seconds ago       Running             etcd                      2                   86f648ebc9c67       etcd-kubernetes-upgrade-502283
	44aa9d4df9427       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Exited              storage-provisioner       2                   654b750f207a7       storage-provisioner
	453bd89ed2f30       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   39fa4be07dec3       coredns-6f6b679f8f-5pwgc
	1f851d0503428       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   24 seconds ago      Exited              coredns                   1                   ebb2197aa54cc       coredns-6f6b679f8f-glsln
	bff6fbc84257f       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   27 seconds ago      Exited              kube-proxy                1                   4ebe19e40d51e       kube-proxy-v4cz8
	5ffa011a3c7b8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   28 seconds ago      Exited              kube-scheduler            1                   241e8368560fa       kube-scheduler-kubernetes-upgrade-502283
	080068210fdd7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago      Exited              etcd                      1                   5fd02bfce8c55       etcd-kubernetes-upgrade-502283
	c6ff6bd4425ef       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   28 seconds ago      Exited              kube-controller-manager   1                   27c478ec91d43       kube-controller-manager-kubernetes-upgrade-502283
	654e1633c9f63       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   28 seconds ago      Exited              kube-apiserver            1                   c4d416a4d721e       kube-apiserver-kubernetes-upgrade-502283
	
	
	==> coredns [1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5baf19572c649666b604bec0be86404c7d35eb94a3f965503e89afd015383263] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [67456e52920e89865d4f031e316a23f93f5a12bcd3face32302bec5665a41678] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-502283
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-502283
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-502283
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:11:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:11:35 +0000   Wed, 28 Aug 2024 18:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:11:35 +0000   Wed, 28 Aug 2024 18:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:11:35 +0000   Wed, 28 Aug 2024 18:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:11:35 +0000   Wed, 28 Aug 2024 18:10:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.140
	  Hostname:    kubernetes-upgrade-502283
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 13760adf510242b58d7b37eacbd6ebda
	  System UUID:                13760adf-5102-42b5-8d7b-37eacbd6ebda
	  Boot ID:                    1e86ed0d-bd59-44bf-9128-ddb968723402
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-5pwgc                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-6f6b679f8f-glsln                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-502283                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         51s
	  kube-system                 kube-apiserver-kubernetes-upgrade-502283             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-502283    200m (10%)    0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-proxy-v4cz8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-502283             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 56s)  kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     55s (x7 over 56s)  kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    55s (x8 over 56s)  kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           45s                node-controller  Node kubernetes-upgrade-502283 event: Registered Node kubernetes-upgrade-502283 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-502283 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-502283 event: Registered Node kubernetes-upgrade-502283 in Controller
	
	
	==> dmesg <==
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.651303] systemd-fstab-generator[553]: Ignoring "noauto" option for root device
	[  +0.079095] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066561] systemd-fstab-generator[565]: Ignoring "noauto" option for root device
	[  +0.181608] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.179327] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.334779] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +4.332814] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +0.066685] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.426522] systemd-fstab-generator[834]: Ignoring "noauto" option for root device
	[  +6.652810] systemd-fstab-generator[1226]: Ignoring "noauto" option for root device
	[  +0.107398] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.523009] kauditd_printk_skb: 18 callbacks suppressed
	[Aug28 18:11] systemd-fstab-generator[2170]: Ignoring "noauto" option for root device
	[  +0.094263] kauditd_printk_skb: 83 callbacks suppressed
	[  +0.083819] systemd-fstab-generator[2182]: Ignoring "noauto" option for root device
	[  +0.504274] systemd-fstab-generator[2361]: Ignoring "noauto" option for root device
	[  +0.389808] systemd-fstab-generator[2471]: Ignoring "noauto" option for root device
	[  +0.962113] systemd-fstab-generator[2844]: Ignoring "noauto" option for root device
	[  +1.427629] systemd-fstab-generator[3181]: Ignoring "noauto" option for root device
	[ +11.397846] kauditd_printk_skb: 300 callbacks suppressed
	[  +6.539924] systemd-fstab-generator[3955]: Ignoring "noauto" option for root device
	[  +5.233865] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.617847] systemd-fstab-generator[4495]: Ignoring "noauto" option for root device
	
	
	==> etcd [080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4] <==
	{"level":"info","ts":"2024-08-28T18:11:11.569564Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-28T18:11:11.642479Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","commit-index":405}
	{"level":"info","ts":"2024-08-28T18:11:11.708177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-28T18:11:11.711321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became follower at term 2"}
	{"level":"info","ts":"2024-08-28T18:11:11.711636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 85ea5ca067fb3fe3 [peers: [], term: 2, commit: 405, applied: 0, lastindex: 405, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-28T18:11:11.723029Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-28T18:11:11.774638Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":397}
	{"level":"info","ts":"2024-08-28T18:11:11.783946Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-28T18:11:11.788599Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"85ea5ca067fb3fe3","timeout":"7s"}
	{"level":"info","ts":"2024-08-28T18:11:11.789550Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"85ea5ca067fb3fe3"}
	{"level":"info","ts":"2024-08-28T18:11:11.790656Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"85ea5ca067fb3fe3","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-28T18:11:11.794423Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-28T18:11:11.798456Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:11.809901Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:11.812904Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:11.803558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 switched to configuration voters=(9649626995603750883)"}
	{"level":"info","ts":"2024-08-28T18:11:11.815162Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","added-peer-id":"85ea5ca067fb3fe3","added-peer-peer-urls":["https://192.168.50.140:2380"]}
	{"level":"info","ts":"2024-08-28T18:11:11.816096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"77a8f052fa5fccd4","local-member-id":"85ea5ca067fb3fe3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:11:11.816197Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:11:11.809649Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:11:11.868135Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T18:11:11.868201Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-28T18:11:11.868212Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-28T18:11:11.869253Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"85ea5ca067fb3fe3","initial-advertise-peer-urls":["https://192.168.50.140:2380"],"listen-peer-urls":["https://192.168.50.140:2380"],"advertise-client-urls":["https://192.168.50.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T18:11:11.869279Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	
	==> etcd [e15e75f361238c3a98cb702d74ac58f499a3c57196c6bd1ba8d137b8d960da67] <==
	{"level":"info","ts":"2024-08-28T18:11:32.359251Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:32.360921Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:32.360979Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-28T18:11:32.362157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:11:32.371799Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T18:11:32.373859Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-28T18:11:32.374034Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.140:2380"}
	{"level":"info","ts":"2024-08-28T18:11:32.380060Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T18:11:32.379996Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"85ea5ca067fb3fe3","initial-advertise-peer-urls":["https://192.168.50.140:2380"],"listen-peer-urls":["https://192.168.50.140:2380"],"advertise-client-urls":["https://192.168.50.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T18:11:33.614896Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T18:11:33.615048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T18:11:33.615111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgPreVoteResp from 85ea5ca067fb3fe3 at term 2"}
	{"level":"info","ts":"2024-08-28T18:11:33.615154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T18:11:33.615191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 received MsgVoteResp from 85ea5ca067fb3fe3 at term 3"}
	{"level":"info","ts":"2024-08-28T18:11:33.615224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"85ea5ca067fb3fe3 became leader at term 3"}
	{"level":"info","ts":"2024-08-28T18:11:33.615256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 85ea5ca067fb3fe3 elected leader 85ea5ca067fb3fe3 at term 3"}
	{"level":"info","ts":"2024-08-28T18:11:33.622732Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"85ea5ca067fb3fe3","local-member-attributes":"{Name:kubernetes-upgrade-502283 ClientURLs:[https://192.168.50.140:2379]}","request-path":"/0/members/85ea5ca067fb3fe3/attributes","cluster-id":"77a8f052fa5fccd4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T18:11:33.624847Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:11:33.625511Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:11:33.626607Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:11:33.642213Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.140:2379"}
	{"level":"info","ts":"2024-08-28T18:11:33.626619Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:11:33.635212Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:11:33.652996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T18:11:33.662539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:11:39 up 1 min,  0 users,  load average: 1.19, 0.33, 0.11
	Linux kubernetes-upgrade-502283 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae] <==
	I0828 18:11:11.513719       1 options.go:228] external host was not specified, using 192.168.50.140
	I0828 18:11:11.526230       1 server.go:142] Version: v1.31.0
	I0828 18:11:11.526296       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [ceb162ff6424aa1cb6cc4c5855aebfce53c86732dd7cb0566ab0b76fec5f6c2a] <==
	I0828 18:11:34.992795       1 shared_informer.go:320] Caches are synced for configmaps
	I0828 18:11:35.004371       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0828 18:11:35.004552       1 aggregator.go:171] initial CRD sync complete...
	I0828 18:11:35.004594       1 autoregister_controller.go:144] Starting autoregister controller
	I0828 18:11:35.004616       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0828 18:11:35.004638       1 cache.go:39] Caches are synced for autoregister controller
	I0828 18:11:35.059397       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0828 18:11:35.065662       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0828 18:11:35.065696       1 policy_source.go:224] refreshing policies
	I0828 18:11:35.090550       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0828 18:11:35.090610       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0828 18:11:35.090618       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0828 18:11:35.090734       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0828 18:11:35.091712       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0828 18:11:35.091891       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0828 18:11:35.094517       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0828 18:11:35.096697       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0828 18:11:35.954871       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0828 18:11:36.684778       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0828 18:11:36.702299       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0828 18:11:36.755520       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0828 18:11:36.818157       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0828 18:11:36.828125       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0828 18:11:37.717737       1 controller.go:615] quota admission added evaluator for: endpoints
	I0828 18:11:38.684319       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3] <==
	
	
	==> kube-controller-manager [eb5589dc2c295d0b1f1cae587e743f3a08cad66b327cb379b7625afcdd9fa485] <==
	I0828 18:11:38.305598       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0828 18:11:38.318064       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0828 18:11:38.327905       1 shared_informer.go:320] Caches are synced for persistent volume
	I0828 18:11:38.328106       1 shared_informer.go:320] Caches are synced for ephemeral
	I0828 18:11:38.327971       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0828 18:11:38.327982       1 shared_informer.go:320] Caches are synced for deployment
	I0828 18:11:38.337494       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0828 18:11:38.340634       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0828 18:11:38.341161       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-502283"
	I0828 18:11:38.345909       1 shared_informer.go:320] Caches are synced for job
	I0828 18:11:38.347006       1 shared_informer.go:320] Caches are synced for endpoint
	I0828 18:11:38.359434       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0828 18:11:38.360605       1 shared_informer.go:320] Caches are synced for cronjob
	I0828 18:11:38.376061       1 shared_informer.go:320] Caches are synced for GC
	I0828 18:11:38.376719       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0828 18:11:38.425939       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0828 18:11:38.437488       1 shared_informer.go:320] Caches are synced for resource quota
	I0828 18:11:38.453695       1 shared_informer.go:320] Caches are synced for resource quota
	I0828 18:11:38.526890       1 shared_informer.go:320] Caches are synced for attach detach
	I0828 18:11:38.526902       1 shared_informer.go:320] Caches are synced for disruption
	I0828 18:11:38.590166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="261.798995ms"
	I0828 18:11:38.590361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="111.859µs"
	I0828 18:11:38.982854       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 18:11:38.997953       1 shared_informer.go:320] Caches are synced for garbage collector
	I0828 18:11:38.997995       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [9efe7eae54089ad44394ff64e5c1e5076632f8683c017c078c95af4dc6a3d14d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:11:36.070169       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:11:36.082296       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.140"]
	E0828 18:11:36.082354       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:11:36.145639       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:11:36.145698       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:11:36.145730       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:11:36.148022       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:11:36.148267       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:11:36.148296       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:11:36.149758       1 config.go:197] "Starting service config controller"
	I0828 18:11:36.149787       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:11:36.149850       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:11:36.149855       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:11:36.150237       1 config.go:326] "Starting node config controller"
	I0828 18:11:36.150254       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:11:36.250880       1 shared_informer.go:320] Caches are synced for node config
	I0828 18:11:36.250920       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:11:36.250951       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375] <==
	
	
	==> kube-scheduler [45f77e5499ac5f8963948196fd07fa1449275bd58c47b254dab572cfd5ae01d8] <==
	I0828 18:11:33.220625       1 serving.go:386] Generated self-signed cert in-memory
	W0828 18:11:34.949525       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:11:34.949667       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:11:34.949700       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:11:34.949783       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:11:35.013186       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 18:11:35.013490       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:11:35.016667       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 18:11:35.016734       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:11:35.017168       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 18:11:35.017413       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 18:11:35.117729       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6] <==
	
	
	==> kubelet <==
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: E0828 18:11:31.628742    3962 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.140:8443: connect: connection refused" node="kubernetes-upgrade-502283"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:31.830632    3962 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-502283"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: E0828 18:11:31.831736    3962 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.140:8443: connect: connection refused" node="kubernetes-upgrade-502283"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:31.847485    3962 scope.go:117] "RemoveContainer" containerID="080068210fdd7abebbcd1ef419bbac91b2d1bcad696752649d1cf7c3659964b4"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:31.848332    3962 scope.go:117] "RemoveContainer" containerID="654e1633c9f632c77116bac9a2a26c95d9c375c296cebc7517723eb3a2245eae"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:31.850388    3962 scope.go:117] "RemoveContainer" containerID="c6ff6bd4425ef6aa7d9a3469eeb6e49b6b9cf09f59fb66f252dd4d260f625ad3"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:31.853215    3962 scope.go:117] "RemoveContainer" containerID="5ffa011a3c7b8de0f98489e35f6aeaa84eb8a91224a6ca4cca2986e99e700eb6"
	Aug 28 18:11:31 kubernetes-upgrade-502283 kubelet[3962]: E0828 18:11:31.978031    3962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-502283?timeout=10s\": dial tcp 192.168.50.140:8443: connect: connection refused" interval="800ms"
	Aug 28 18:11:32 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:32.233262    3962 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-502283"
	Aug 28 18:11:32 kubernetes-upgrade-502283 kubelet[3962]: E0828 18:11:32.234243    3962 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.140:8443: connect: connection refused" node="kubernetes-upgrade-502283"
	Aug 28 18:11:33 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:33.036502    3962 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-502283"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.158730    3962 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-502283"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.158877    3962 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-502283"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.158900    3962 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.159903    3962 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: E0828 18:11:35.205983    3962 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-kubernetes-upgrade-502283\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-502283"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.338860    3962 apiserver.go:52] "Watching apiserver"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.366479    3962 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.385968    3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/414642eb-551c-40f0-bef1-ef01a1bd6f33-lib-modules\") pod \"kube-proxy-v4cz8\" (UID: \"414642eb-551c-40f0-bef1-ef01a1bd6f33\") " pod="kube-system/kube-proxy-v4cz8"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.386025    3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8d0e3f80-e073-40ab-a28a-6250bcf1b817-tmp\") pod \"storage-provisioner\" (UID: \"8d0e3f80-e073-40ab-a28a-6250bcf1b817\") " pod="kube-system/storage-provisioner"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.386090    3962 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/414642eb-551c-40f0-bef1-ef01a1bd6f33-xtables-lock\") pod \"kube-proxy-v4cz8\" (UID: \"414642eb-551c-40f0-bef1-ef01a1bd6f33\") " pod="kube-system/kube-proxy-v4cz8"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.643274    3962 scope.go:117] "RemoveContainer" containerID="44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.643660    3962 scope.go:117] "RemoveContainer" containerID="bff6fbc84257f6a2748f74d93548e999251f3ede50cb0e1e95106f2cc67b5375"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.643852    3962 scope.go:117] "RemoveContainer" containerID="1f851d05034284a59e7ab87ec6b95b6321f194f19c14b6946fe7937d017cc6c3"
	Aug 28 18:11:35 kubernetes-upgrade-502283 kubelet[3962]: I0828 18:11:35.644107    3962 scope.go:117] "RemoveContainer" containerID="453bd89ed2f30b49e86e0fc717094e1c3f47bad6bd1326d2c8b46c6f5878839b"
	
	
	==> storage-provisioner [33f67314f6c0a6cd58a06125a0f0a0a003d306d60fd13429c2e71a5b24263ecf] <==
	I0828 18:11:35.964179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:11:36.004726       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:11:36.004800       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [44aa9d4df94271f7fcdfea9471e79e90326b0b3f5027b7348418377676f4f7ca] <==
	I0828 18:11:27.794525       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:11:27.796549       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-502283 -n kubernetes-upgrade-502283
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-502283 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-502283" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-502283
--- FAIL: TestKubernetesUpgrade (401.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (303.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m2.8964796s)

                                                
                                                
-- stdout --
	* [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:11:41.009716   69228 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:11:41.009970   69228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:11:41.009979   69228 out.go:358] Setting ErrFile to fd 2...
	I0828 18:11:41.009983   69228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:11:41.010174   69228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:11:41.010737   69228 out.go:352] Setting JSON to false
	I0828 18:11:41.011623   69228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6847,"bootTime":1724861854,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:11:41.011682   69228 start.go:139] virtualization: kvm guest
	I0828 18:11:41.013814   69228 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:11:41.015089   69228 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:11:41.015089   69228 notify.go:220] Checking for updates...
	I0828 18:11:41.017215   69228 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:11:41.018360   69228 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:11:41.019431   69228 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:11:41.020535   69228 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:11:41.021805   69228 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:11:41.023483   69228 config.go:182] Loaded profile config "bridge-647068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:41.023584   69228 config.go:182] Loaded profile config "enable-default-cni-647068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:41.023679   69228 config.go:182] Loaded profile config "flannel-647068": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:11:41.023788   69228 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:11:41.060828   69228 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 18:11:41.062027   69228 start.go:297] selected driver: kvm2
	I0828 18:11:41.062043   69228 start.go:901] validating driver "kvm2" against <nil>
	I0828 18:11:41.062053   69228 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:11:41.062795   69228 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:11:41.062859   69228 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:11:41.077621   69228 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:11:41.077670   69228 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 18:11:41.077902   69228 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:11:41.077971   69228 cni.go:84] Creating CNI manager for ""
	I0828 18:11:41.077987   69228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:11:41.077998   69228 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 18:11:41.078053   69228 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:11:41.078198   69228 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:11:41.079959   69228 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:11:41.081144   69228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:11:41.081221   69228 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:11:41.081233   69228 cache.go:56] Caching tarball of preloaded images
	I0828 18:11:41.081327   69228 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:11:41.081339   69228 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:11:41.081430   69228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:11:41.081453   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json: {Name:mk131fdefb68c7b808e0f4120814129ae0fd6a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:11:41.081644   69228 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:12:09.582753   69228 start.go:364] duration metric: took 28.501068548s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:12:09.582814   69228 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:12:09.582976   69228 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 18:12:09.676353   69228 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 18:12:09.676602   69228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:12:09.676656   69228 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:12:09.692115   69228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0828 18:12:09.692622   69228 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:12:09.693245   69228 main.go:141] libmachine: Using API Version  1
	I0828 18:12:09.693272   69228 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:12:09.693594   69228 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:12:09.693827   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:12:09.694028   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:09.694210   69228 start.go:159] libmachine.API.Create for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:12:09.694241   69228 client.go:168] LocalClient.Create starting
	I0828 18:12:09.694275   69228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 18:12:09.694325   69228 main.go:141] libmachine: Decoding PEM data...
	I0828 18:12:09.694347   69228 main.go:141] libmachine: Parsing certificate...
	I0828 18:12:09.694419   69228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 18:12:09.694440   69228 main.go:141] libmachine: Decoding PEM data...
	I0828 18:12:09.694450   69228 main.go:141] libmachine: Parsing certificate...
	I0828 18:12:09.694468   69228 main.go:141] libmachine: Running pre-create checks...
	I0828 18:12:09.694476   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .PreCreateCheck
	I0828 18:12:09.694848   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:12:09.695258   69228 main.go:141] libmachine: Creating machine...
	I0828 18:12:09.695271   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .Create
	I0828 18:12:09.695475   69228 main.go:141] libmachine: (old-k8s-version-131737) Creating KVM machine...
	I0828 18:12:09.696820   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found existing default KVM network
	I0828 18:12:09.698237   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:09.698044   69681 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:9c:a1:16} reservation:<nil>}
	I0828 18:12:09.699381   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:09.699294   69681 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002da1d0}
	I0828 18:12:09.699422   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | created network xml: 
	I0828 18:12:09.699430   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | <network>
	I0828 18:12:09.699442   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   <name>mk-old-k8s-version-131737</name>
	I0828 18:12:09.699450   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   <dns enable='no'/>
	I0828 18:12:09.699456   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   
	I0828 18:12:09.699463   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0828 18:12:09.699472   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |     <dhcp>
	I0828 18:12:09.699479   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0828 18:12:09.699490   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |     </dhcp>
	I0828 18:12:09.699540   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   </ip>
	I0828 18:12:09.699575   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG |   
	I0828 18:12:09.699586   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | </network>
	I0828 18:12:09.699594   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | 
	I0828 18:12:09.825524   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | trying to create private KVM network mk-old-k8s-version-131737 192.168.50.0/24...
	I0828 18:12:09.902207   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | private KVM network mk-old-k8s-version-131737 192.168.50.0/24 created
	I0828 18:12:09.902317   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737 ...
	I0828 18:12:09.902358   69228 main.go:141] libmachine: (old-k8s-version-131737) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 18:12:09.902373   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:09.902301   69681 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:12:09.902594   69228 main.go:141] libmachine: (old-k8s-version-131737) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 18:12:10.177174   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:10.177050   69681 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa...
	I0828 18:12:10.325436   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:10.325303   69681 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/old-k8s-version-131737.rawdisk...
	I0828 18:12:10.325471   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Writing magic tar header
	I0828 18:12:10.325485   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Writing SSH key tar header
	I0828 18:12:10.325493   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:10.325458   69681 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737 ...
	I0828 18:12:10.325597   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737
	I0828 18:12:10.325633   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737 (perms=drwx------)
	I0828 18:12:10.325647   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 18:12:10.325745   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 18:12:10.325814   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:12:10.325826   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 18:12:10.325847   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 18:12:10.325867   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 18:12:10.325880   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 18:12:10.325891   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 18:12:10.325904   69228 main.go:141] libmachine: (old-k8s-version-131737) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 18:12:10.325916   69228 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:12:10.325929   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home/jenkins
	I0828 18:12:10.325939   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Checking permissions on dir: /home
	I0828 18:12:10.325950   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Skipping /home - not owner
	I0828 18:12:10.327166   69228 main.go:141] libmachine: (old-k8s-version-131737) define libvirt domain using xml: 
	I0828 18:12:10.327193   69228 main.go:141] libmachine: (old-k8s-version-131737) <domain type='kvm'>
	I0828 18:12:10.327210   69228 main.go:141] libmachine: (old-k8s-version-131737)   <name>old-k8s-version-131737</name>
	I0828 18:12:10.327217   69228 main.go:141] libmachine: (old-k8s-version-131737)   <memory unit='MiB'>2200</memory>
	I0828 18:12:10.327223   69228 main.go:141] libmachine: (old-k8s-version-131737)   <vcpu>2</vcpu>
	I0828 18:12:10.327228   69228 main.go:141] libmachine: (old-k8s-version-131737)   <features>
	I0828 18:12:10.327239   69228 main.go:141] libmachine: (old-k8s-version-131737)     <acpi/>
	I0828 18:12:10.327244   69228 main.go:141] libmachine: (old-k8s-version-131737)     <apic/>
	I0828 18:12:10.327258   69228 main.go:141] libmachine: (old-k8s-version-131737)     <pae/>
	I0828 18:12:10.327270   69228 main.go:141] libmachine: (old-k8s-version-131737)     
	I0828 18:12:10.327280   69228 main.go:141] libmachine: (old-k8s-version-131737)   </features>
	I0828 18:12:10.327286   69228 main.go:141] libmachine: (old-k8s-version-131737)   <cpu mode='host-passthrough'>
	I0828 18:12:10.327294   69228 main.go:141] libmachine: (old-k8s-version-131737)   
	I0828 18:12:10.327304   69228 main.go:141] libmachine: (old-k8s-version-131737)   </cpu>
	I0828 18:12:10.327332   69228 main.go:141] libmachine: (old-k8s-version-131737)   <os>
	I0828 18:12:10.327355   69228 main.go:141] libmachine: (old-k8s-version-131737)     <type>hvm</type>
	I0828 18:12:10.327370   69228 main.go:141] libmachine: (old-k8s-version-131737)     <boot dev='cdrom'/>
	I0828 18:12:10.327382   69228 main.go:141] libmachine: (old-k8s-version-131737)     <boot dev='hd'/>
	I0828 18:12:10.327395   69228 main.go:141] libmachine: (old-k8s-version-131737)     <bootmenu enable='no'/>
	I0828 18:12:10.327406   69228 main.go:141] libmachine: (old-k8s-version-131737)   </os>
	I0828 18:12:10.327417   69228 main.go:141] libmachine: (old-k8s-version-131737)   <devices>
	I0828 18:12:10.327433   69228 main.go:141] libmachine: (old-k8s-version-131737)     <disk type='file' device='cdrom'>
	I0828 18:12:10.327461   69228 main.go:141] libmachine: (old-k8s-version-131737)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/boot2docker.iso'/>
	I0828 18:12:10.327473   69228 main.go:141] libmachine: (old-k8s-version-131737)       <target dev='hdc' bus='scsi'/>
	I0828 18:12:10.327482   69228 main.go:141] libmachine: (old-k8s-version-131737)       <readonly/>
	I0828 18:12:10.327492   69228 main.go:141] libmachine: (old-k8s-version-131737)     </disk>
	I0828 18:12:10.327502   69228 main.go:141] libmachine: (old-k8s-version-131737)     <disk type='file' device='disk'>
	I0828 18:12:10.327514   69228 main.go:141] libmachine: (old-k8s-version-131737)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 18:12:10.327536   69228 main.go:141] libmachine: (old-k8s-version-131737)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/old-k8s-version-131737.rawdisk'/>
	I0828 18:12:10.327548   69228 main.go:141] libmachine: (old-k8s-version-131737)       <target dev='hda' bus='virtio'/>
	I0828 18:12:10.327560   69228 main.go:141] libmachine: (old-k8s-version-131737)     </disk>
	I0828 18:12:10.327573   69228 main.go:141] libmachine: (old-k8s-version-131737)     <interface type='network'>
	I0828 18:12:10.327588   69228 main.go:141] libmachine: (old-k8s-version-131737)       <source network='mk-old-k8s-version-131737'/>
	I0828 18:12:10.327607   69228 main.go:141] libmachine: (old-k8s-version-131737)       <model type='virtio'/>
	I0828 18:12:10.327627   69228 main.go:141] libmachine: (old-k8s-version-131737)     </interface>
	I0828 18:12:10.327638   69228 main.go:141] libmachine: (old-k8s-version-131737)     <interface type='network'>
	I0828 18:12:10.327668   69228 main.go:141] libmachine: (old-k8s-version-131737)       <source network='default'/>
	I0828 18:12:10.327691   69228 main.go:141] libmachine: (old-k8s-version-131737)       <model type='virtio'/>
	I0828 18:12:10.327704   69228 main.go:141] libmachine: (old-k8s-version-131737)     </interface>
	I0828 18:12:10.327719   69228 main.go:141] libmachine: (old-k8s-version-131737)     <serial type='pty'>
	I0828 18:12:10.327733   69228 main.go:141] libmachine: (old-k8s-version-131737)       <target port='0'/>
	I0828 18:12:10.327744   69228 main.go:141] libmachine: (old-k8s-version-131737)     </serial>
	I0828 18:12:10.327756   69228 main.go:141] libmachine: (old-k8s-version-131737)     <console type='pty'>
	I0828 18:12:10.327768   69228 main.go:141] libmachine: (old-k8s-version-131737)       <target type='serial' port='0'/>
	I0828 18:12:10.327781   69228 main.go:141] libmachine: (old-k8s-version-131737)     </console>
	I0828 18:12:10.327792   69228 main.go:141] libmachine: (old-k8s-version-131737)     <rng model='virtio'>
	I0828 18:12:10.327826   69228 main.go:141] libmachine: (old-k8s-version-131737)       <backend model='random'>/dev/random</backend>
	I0828 18:12:10.327850   69228 main.go:141] libmachine: (old-k8s-version-131737)     </rng>
	I0828 18:12:10.327862   69228 main.go:141] libmachine: (old-k8s-version-131737)     
	I0828 18:12:10.327874   69228 main.go:141] libmachine: (old-k8s-version-131737)     
	I0828 18:12:10.327887   69228 main.go:141] libmachine: (old-k8s-version-131737)   </devices>
	I0828 18:12:10.327900   69228 main.go:141] libmachine: (old-k8s-version-131737) </domain>
	I0828 18:12:10.327916   69228 main.go:141] libmachine: (old-k8s-version-131737) 
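The lines above show libmachine rendering the guest's domain XML and handing it to libvirt ("define libvirt domain using xml" followed by "Creating domain..."). A minimal sketch of that define-and-start flow, assuming the libvirt.org/go/libvirt bindings; the connection URI matches the KVMQemuURI seen later in this log, and the XML string is a placeholder rather than minikube's actual template:

```go
// Sketch: define a KVM domain from XML and boot it, roughly what the
// "define libvirt domain using xml" / "Creating domain..." steps above do.
// Assumes the libvirt.org/go/libvirt bindings; domainXML is a placeholder.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI the kvm2 driver targets
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
	if err != nil {
		return err
	}
	defer dom.Free()

	return dom.Create() // "Creating domain..." boots the defined VM
}

func main() {
	if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
		log.Fatal(err)
	}
}
```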
	I0828 18:12:10.335491   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:15:65:69 in network default
	I0828 18:12:10.336126   69228 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:12:10.336152   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:10.336900   69228 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:12:10.337324   69228 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:12:10.338067   69228 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:12:10.339116   69228 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:12:11.945989   69228 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:12:11.947087   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:11.948080   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:11.948112   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:11.947960   69681 retry.go:31] will retry after 216.959622ms: waiting for machine to come up
	I0828 18:12:12.166298   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:12.166782   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:12.166800   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:12.166700   69681 retry.go:31] will retry after 347.764496ms: waiting for machine to come up
	I0828 18:12:12.516674   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:12.517174   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:12.517195   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:12.517138   69681 retry.go:31] will retry after 467.400316ms: waiting for machine to come up
	I0828 18:12:12.986162   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:12.986737   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:12.986761   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:12.986672   69681 retry.go:31] will retry after 403.197013ms: waiting for machine to come up
	I0828 18:12:13.391378   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:13.392022   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:13.392045   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:13.391955   69681 retry.go:31] will retry after 536.307079ms: waiting for machine to come up
	I0828 18:12:13.930061   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:13.930154   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:13.930186   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:13.930071   69681 retry.go:31] will retry after 647.131964ms: waiting for machine to come up
	I0828 18:12:14.579044   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:14.579707   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:14.579741   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:14.579600   69681 retry.go:31] will retry after 1.179847133s: waiting for machine to come up
	I0828 18:12:15.760874   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:15.761606   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:15.761639   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:15.761553   69681 retry.go:31] will retry after 911.372502ms: waiting for machine to come up
	I0828 18:12:16.677969   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:16.678586   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:16.678616   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:16.678544   69681 retry.go:31] will retry after 1.192321531s: waiting for machine to come up
	I0828 18:12:17.872890   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:17.873329   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:17.873363   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:17.873298   69681 retry.go:31] will retry after 2.32398318s: waiting for machine to come up
	I0828 18:12:20.199472   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:20.200181   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:20.200208   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:20.200081   69681 retry.go:31] will retry after 1.751997633s: waiting for machine to come up
	I0828 18:12:21.953641   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:21.954175   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:21.954214   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:21.954171   69681 retry.go:31] will retry after 2.579760602s: waiting for machine to come up
	I0828 18:12:24.535063   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:24.535688   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:24.535737   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:24.535626   69681 retry.go:31] will retry after 4.036431206s: waiting for machine to come up
	I0828 18:12:28.575979   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:28.576504   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:12:28.576534   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:12:28.576463   69681 retry.go:31] will retry after 5.555381898s: waiting for machine to come up
	I0828 18:12:34.135740   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:34.136290   69228 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
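The repeated "will retry after ...: waiting for machine to come up" lines are a polling loop that keeps querying the domain's DHCP lease until an address shows up. A standalone sketch of that wait-with-backoff pattern, using only the standard library; lookupIP is a hypothetical stand-in for the lease query, and minikube's own retry.go helper is shaped differently:

```go
// Sketch: poll for the machine's IP with growing, jittered delays, mirroring
// the "waiting for machine to come up" retries logged above.
// lookupIP is a hypothetical helper standing in for the DHCP-lease query.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) { return "", errors.New("no lease yet") } // placeholder

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, like the uneven delays in the log
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}
```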
	I0828 18:12:34.136318   69228 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:12:34.136332   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:34.136640   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737
	I0828 18:12:34.219747   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:12:34.219775   69228 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:12:34.219791   69228 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:12:34.222346   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:34.222735   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737
	I0828 18:12:34.222762   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find defined IP address of network mk-old-k8s-version-131737 interface with MAC address 52:54:00:21:f1:8b
	I0828 18:12:34.222937   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:12:34.222962   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:12:34.223010   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:12:34.223033   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:12:34.223050   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:12:34.226705   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: exit status 255: 
	I0828 18:12:34.226727   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0828 18:12:34.226734   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | command : exit 0
	I0828 18:12:34.226740   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | err     : exit status 255
	I0828 18:12:34.226748   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | output  : 
	I0828 18:12:37.229027   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:12:37.231737   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.232105   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.232124   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.232287   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:12:37.232308   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:12:37.232335   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:12:37.232348   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:12:37.232365   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:12:37.358043   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:12:37.358356   69228 main.go:141] libmachine: (old-k8s-version-131737) KVM machine creation complete!
	I0828 18:12:37.358675   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:12:37.359230   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:37.359421   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:37.359565   69228 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 18:12:37.359579   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:12:37.360869   69228 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 18:12:37.360882   69228 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 18:12:37.360887   69228 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 18:12:37.360893   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:37.363662   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.364011   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.364032   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.364243   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:37.364404   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.364561   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.364716   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:37.364955   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:37.365196   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:37.365207   69228 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 18:12:37.461364   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
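Once the address is known, libmachine probes the guest by running `exit 0` over SSH, first via the external ssh binary (which fails with exit status 255 while sshd is still starting) and then with a native client as above. A rough equivalent of that probe using golang.org/x/crypto/ssh; the address, user, and key path are placeholders taken from the log:

```go
// Sketch: probe SSH availability by running "exit 0", as in the
// "Waiting for SSH to be available..." / "About to run SSH command: exit 0" steps.
// Assumes golang.org/x/crypto/ssh; the address and key path are placeholders.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0")
}

func main() {
	if err := probeSSH("192.168.50.99:22", "docker", "/path/to/id_rsa"); err != nil {
		log.Fatal(err)
	}
}
```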
	I0828 18:12:37.461387   69228 main.go:141] libmachine: Detecting the provisioner...
	I0828 18:12:37.461395   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:37.464010   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.464404   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.464443   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.464671   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:37.464860   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.465036   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.465140   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:37.465360   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:37.465575   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:37.465587   69228 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 18:12:37.562905   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 18:12:37.562970   69228 main.go:141] libmachine: found compatible host: buildroot
	I0828 18:12:37.562977   69228 main.go:141] libmachine: Provisioning with buildroot...
	I0828 18:12:37.562985   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:12:37.563247   69228 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:12:37.563290   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:12:37.563544   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:37.566560   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.567007   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.567047   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.567112   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:37.567313   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.567470   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.567625   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:37.567829   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:37.568023   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:37.568041   69228 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:12:37.677905   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:12:37.677939   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:37.680729   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.681054   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.681083   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.681241   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:37.681440   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.681650   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:37.681821   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:37.682002   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:37.682206   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:37.682224   69228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:12:37.786501   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:12:37.786534   69228 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:12:37.786557   69228 buildroot.go:174] setting up certificates
	I0828 18:12:37.786569   69228 provision.go:84] configureAuth start
	I0828 18:12:37.786578   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:12:37.786924   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:12:37.789393   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.789819   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.789853   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.789939   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:37.792445   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.792819   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:37.792852   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:37.792928   69228 provision.go:143] copyHostCerts
	I0828 18:12:37.792982   69228 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:12:37.792997   69228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:12:37.793055   69228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:12:37.793155   69228 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:12:37.793162   69228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:12:37.793188   69228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:12:37.793291   69228 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:12:37.793302   69228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:12:37.793324   69228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:12:37.793378   69228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
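The configureAuth phase refreshes the host certs and then generates a per-machine server certificate whose SANs cover the loopback address, the machine IP, and the hostnames listed in the line above. A self-contained sketch of issuing such a SAN certificate with crypto/x509, assuming the CA certificate and key are loaded elsewhere; this illustrates the shape of the step, not minikube's provision code:

```go
// Sketch: issue a server certificate with the SANs shown in the log
// (127.0.0.1, 192.168.50.99, localhost, minikube, old-k8s-version-131737),
// signed by a CA cert/key loaded elsewhere. Illustrative only.
package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-131737"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.99")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-131737"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}
```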
	I0828 18:12:38.041239   69228 provision.go:177] copyRemoteCerts
	I0828 18:12:38.041292   69228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:12:38.041315   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.043650   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.043941   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.043971   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.044075   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.044255   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.044393   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.044533   69228 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:12:38.124760   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:12:38.148289   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:12:38.169987   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:12:38.191579   69228 provision.go:87] duration metric: took 404.997348ms to configureAuth
	I0828 18:12:38.191614   69228 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:12:38.191821   69228 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:12:38.191908   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.194501   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.194841   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.194862   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.195061   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.195277   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.195460   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.195627   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.195835   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:38.196024   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:38.196043   69228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:12:38.419560   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:12:38.419585   69228 main.go:141] libmachine: Checking connection to Docker...
	I0828 18:12:38.419593   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetURL
	I0828 18:12:38.420938   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using libvirt version 6000000
	I0828 18:12:38.423540   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.423949   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.423979   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.424150   69228 main.go:141] libmachine: Docker is up and running!
	I0828 18:12:38.424168   69228 main.go:141] libmachine: Reticulating splines...
	I0828 18:12:38.424186   69228 client.go:171] duration metric: took 28.729926888s to LocalClient.Create
	I0828 18:12:38.424220   69228 start.go:167] duration metric: took 28.730009867s to libmachine.API.Create "old-k8s-version-131737"
	I0828 18:12:38.424246   69228 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:12:38.424262   69228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:12:38.424286   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:38.424511   69228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:12:38.424537   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.426950   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.427262   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.427290   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.427445   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.427610   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.427807   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.427949   69228 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:12:38.508167   69228 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:12:38.512049   69228 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:12:38.512074   69228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:12:38.512146   69228 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:12:38.512235   69228 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:12:38.512324   69228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:12:38.521305   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:12:38.544111   69228 start.go:296] duration metric: took 119.848711ms for postStartSetup
	I0828 18:12:38.544211   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:12:38.544828   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:12:38.547350   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.547760   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.547787   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.548045   69228 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:12:38.548248   69228 start.go:128] duration metric: took 28.965258698s to createHost
	I0828 18:12:38.548272   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.550430   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.550730   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.550759   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.550972   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.551177   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.551332   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.551517   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.551688   69228 main.go:141] libmachine: Using SSH client type: native
	I0828 18:12:38.551848   69228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:12:38.551858   69228 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:12:38.650391   69228 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724868758.627265876
	
	I0828 18:12:38.650413   69228 fix.go:216] guest clock: 1724868758.627265876
	I0828 18:12:38.650422   69228 fix.go:229] Guest: 2024-08-28 18:12:38.627265876 +0000 UTC Remote: 2024-08-28 18:12:38.548260777 +0000 UTC m=+57.572068803 (delta=79.005099ms)
	I0828 18:12:38.650446   69228 fix.go:200] guest clock delta is within tolerance: 79.005099ms
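The fix.go lines compare the guest's `date +%s.%N` output against the host clock and only resync when the delta exceeds a tolerance; here the 79ms delta passes. A small sketch of that comparison, where runSSH is a hypothetical command runner on the guest:

```go
// Sketch: parse the guest's `date +%s.%N` output and check the clock delta
// against a tolerance, as the "guest clock" fix.go lines above do.
// runSSH is a hypothetical helper that runs a command on the guest.
package clockcheck

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockWithinTolerance(runSSH func(string) (string, error), tol time.Duration) (bool, error) {
	out, err := runSSH("date +%s.%N")
	if err != nil {
		return false, err
	}
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return false, fmt.Errorf("parsing guest clock %q: %w", out, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol, nil
}
```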
	I0828 18:12:38.650454   69228 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 29.067674086s
	I0828 18:12:38.650485   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:38.650778   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:12:38.653416   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.653806   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.653836   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.654158   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:38.654754   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:38.654961   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:12:38.655072   69228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:12:38.655123   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.655168   69228 ssh_runner.go:195] Run: cat /version.json
	I0828 18:12:38.655183   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:12:38.657780   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.657966   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.658193   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.658221   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.658380   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.658448   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:38.658508   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:38.658553   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.658641   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:12:38.658741   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.658825   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:12:38.658888   69228 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:12:38.658951   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:12:38.659054   69228 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:12:38.772415   69228 ssh_runner.go:195] Run: systemctl --version
	I0828 18:12:38.778317   69228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:12:38.933661   69228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:12:38.939783   69228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:12:38.939843   69228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:12:38.958983   69228 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:12:38.959015   69228 start.go:495] detecting cgroup driver to use...
	I0828 18:12:38.959089   69228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:12:38.976260   69228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:12:38.992207   69228 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:12:38.992263   69228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:12:39.011121   69228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:12:39.027853   69228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:12:39.159758   69228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:12:39.300741   69228 docker.go:233] disabling docker service ...
	I0828 18:12:39.300816   69228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:12:39.315787   69228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:12:39.327824   69228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:12:39.477468   69228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:12:39.600913   69228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:12:39.614961   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:12:39.632413   69228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:12:39.632469   69228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:12:39.643396   69228 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:12:39.643464   69228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:12:39.654024   69228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:12:39.664152   69228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:12:39.674153   69228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:12:39.684687   69228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:12:39.694153   69228 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:12:39.694228   69228 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:12:39.707753   69228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:12:39.718616   69228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:12:39.855755   69228 ssh_runner.go:195] Run: sudo systemctl restart crio
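The block above writes /etc/crictl.yaml, patches /etc/crio/crio.conf.d/02-crio.conf via sed to pin the pause image and the cgroupfs cgroup manager, enables bridge netfilter and IP forwarding, and then restarts CRI-O. The same sequence expressed as the commands a hypothetical runner would execute on the guest, gathered into one Go helper for readability:

```go
// Sketch: the CRI-O configuration steps logged above, expressed as the
// commands a hypothetical runner `run` would execute on the guest.
package criocfg

func configureCRIO(run func(cmd string) error) error {
	cmds := []string{
		// point crictl at the CRI-O socket
		`sudo sh -c 'mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'`,
		// pin the pause image expected by Kubernetes v1.20.0
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		// use cgroupfs and run conmon in the pod cgroup
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// make sure bridged traffic hits iptables and forwarding is on
		`sudo modprobe br_netfilter`,
		`sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return err
		}
	}
	return nil
}
```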
	I0828 18:12:39.962377   69228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:12:39.962459   69228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:12:39.967624   69228 start.go:563] Will wait 60s for crictl version
	I0828 18:12:39.967695   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:39.971275   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:12:40.010517   69228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:12:40.010600   69228 ssh_runner.go:195] Run: crio --version
	I0828 18:12:40.039650   69228 ssh_runner.go:195] Run: crio --version
	I0828 18:12:40.076662   69228 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:12:40.077699   69228 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:12:40.080636   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:40.081002   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:12:26 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:12:40.081036   69228 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:12:40.081244   69228 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:12:40.085443   69228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:12:40.097472   69228 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:12:40.097596   69228 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:12:40.097641   69228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:12:40.126899   69228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:12:40.126990   69228 ssh_runner.go:195] Run: which lz4
	I0828 18:12:40.130806   69228 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:12:40.134979   69228 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:12:40.135013   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:12:41.699678   69228 crio.go:462] duration metric: took 1.568907378s to copy over tarball
	I0828 18:12:41.699776   69228 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:12:44.579371   69228 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.879559147s)
	I0828 18:12:44.579402   69228 crio.go:469] duration metric: took 2.879690624s to extract the tarball
	I0828 18:12:44.579413   69228 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:12:44.624043   69228 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:12:44.671292   69228 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:12:44.671315   69228 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:12:44.671405   69228 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:12:44.671693   69228 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:12:44.671719   69228 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:44.671847   69228 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:44.671895   69228 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:44.671966   69228 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:12:44.672019   69228 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:44.671698   69228 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:44.673055   69228 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:12:44.673152   69228 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:12:44.673173   69228 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:44.673218   69228 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:44.673235   69228 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:44.673297   69228 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:44.673295   69228 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:12:44.673061   69228 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:44.886810   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:44.928999   69228 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:12:44.929051   69228 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:44.929110   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:44.930869   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:12:44.933320   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:44.958750   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:44.965116   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:44.974573   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:45.006498   69228 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:12:45.006549   69228 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:12:45.006604   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:45.006611   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.013898   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:12:45.024864   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:45.045462   69228 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:12:45.045513   69228 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:45.045569   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.078037   69228 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:12:45.078099   69228 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:45.078157   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.130766   69228 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:12:45.130815   69228 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:45.130866   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.136101   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:12:45.136114   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:12:45.150526   69228 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:12:45.150577   69228 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:12:45.150583   69228 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:12:45.150618   69228 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:45.150628   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.150663   69228 ssh_runner.go:195] Run: which crictl
	I0828 18:12:45.150724   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:45.150769   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:45.150804   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:45.248891   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:12:45.248932   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:12:45.248995   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:45.249000   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:12:45.279160   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:45.279193   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:45.279285   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:45.352876   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:12:45.382481   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:12:45.382500   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:12:45.412718   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:12:45.412788   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:45.412815   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:12:45.453214   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:12:45.492743   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:12:45.517610   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:12:45.562269   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:12:45.562344   69228 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:12:45.562401   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:12:45.574603   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:12:45.597843   69228 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:12:45.890858   69228 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:12:46.039990   69228 cache_images.go:92] duration metric: took 1.368656386s to LoadCachedImages
	W0828 18:12:46.040091   69228 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0828 18:12:46.040109   69228 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:12:46.040216   69228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:12:46.040305   69228 ssh_runner.go:195] Run: crio config
	I0828 18:12:46.089096   69228 cni.go:84] Creating CNI manager for ""
	I0828 18:12:46.089118   69228 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:12:46.089129   69228 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:12:46.089150   69228 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:12:46.089305   69228 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:12:46.089386   69228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:12:46.099493   69228 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:12:46.099575   69228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:12:46.109152   69228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:12:46.127330   69228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:12:46.143060   69228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:12:46.164864   69228 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:12:46.170162   69228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:12:46.188767   69228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:12:46.321177   69228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:12:46.338205   69228 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:12:46.338228   69228 certs.go:194] generating shared ca certs ...
	I0828 18:12:46.338255   69228 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.338432   69228 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:12:46.338490   69228 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:12:46.338504   69228 certs.go:256] generating profile certs ...
	I0828 18:12:46.338572   69228 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:12:46.338589   69228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.crt with IP's: []
	I0828 18:12:46.458906   69228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.crt ...
	I0828 18:12:46.458946   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.crt: {Name:mk5a2b88dd3316ea18a94f40a6b562e755e93eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.459167   69228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key ...
	I0828 18:12:46.459191   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key: {Name:mk31fa0c002e762f00dd3919cf0f8a4bf7070310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.459308   69228 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:12:46.459332   69228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt.131f8aa0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.99]
	I0828 18:12:46.660180   69228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt.131f8aa0 ...
	I0828 18:12:46.660208   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt.131f8aa0: {Name:mk5a98aab293c282baaca80dc3bd4fafb1217c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.660394   69228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0 ...
	I0828 18:12:46.660416   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0: {Name:mkc879acb3b8e1cbbfee13b68880c8b27233d735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.660511   69228 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt.131f8aa0 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt
	I0828 18:12:46.660618   69228 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0 -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key
	I0828 18:12:46.660716   69228 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:12:46.660738   69228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt with IP's: []
	I0828 18:12:46.891608   69228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt ...
	I0828 18:12:46.891643   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt: {Name:mkfeee50d742c3d76b55584377cb836c90c77f9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.891816   69228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key ...
	I0828 18:12:46.891831   69228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key: {Name:mkb02b9ec205644f3afcfabd10340f515b567540 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:12:46.891993   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:12:46.892030   69228 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:12:46.892040   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:12:46.892061   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:12:46.892084   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:12:46.892105   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:12:46.892147   69228 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:12:46.892721   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:12:46.920315   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:12:46.945612   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:12:46.972901   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:12:46.997308   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:12:47.027523   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:12:47.055549   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:12:47.080872   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:12:47.105078   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:12:47.128925   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:12:47.154974   69228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:12:47.178153   69228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:12:47.194482   69228 ssh_runner.go:195] Run: openssl version
	I0828 18:12:47.200034   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:12:47.212947   69228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:12:47.217505   69228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:12:47.217583   69228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:12:47.223559   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:12:47.234593   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:12:47.250934   69228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:12:47.256800   69228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:12:47.256868   69228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:12:47.265163   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:12:47.277803   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:12:47.290232   69228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:12:47.294938   69228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:12:47.295008   69228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:12:47.303214   69228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:12:47.316358   69228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:12:47.320654   69228 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 18:12:47.320714   69228 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:12:47.320806   69228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:12:47.320890   69228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:12:47.363749   69228 cri.go:89] found id: ""
	I0828 18:12:47.363822   69228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:12:47.374523   69228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:12:47.384186   69228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:12:47.393753   69228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:12:47.393777   69228 kubeadm.go:157] found existing configuration files:
	
	I0828 18:12:47.393829   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:12:47.402962   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:12:47.403033   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:12:47.412707   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:12:47.422836   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:12:47.422900   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:12:47.433398   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:12:47.444425   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:12:47.444501   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:12:47.455276   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:12:47.466453   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:12:47.466516   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:12:47.477011   69228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:12:47.591736   69228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:12:47.591806   69228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:12:47.731650   69228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:12:47.731821   69228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:12:47.731970   69228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:12:47.929516   69228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:12:47.931803   69228 out.go:235]   - Generating certificates and keys ...
	I0828 18:12:47.931912   69228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:12:47.931990   69228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:12:48.114906   69228 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 18:12:48.242868   69228 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 18:12:48.479147   69228 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 18:12:48.643983   69228 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 18:12:48.983223   69228 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 18:12:48.983436   69228 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-131737] and IPs [192.168.50.99 127.0.0.1 ::1]
	I0828 18:12:49.631814   69228 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 18:12:49.632052   69228 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-131737] and IPs [192.168.50.99 127.0.0.1 ::1]
	I0828 18:12:49.865713   69228 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 18:12:50.082842   69228 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 18:12:50.173497   69228 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 18:12:50.173602   69228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:12:50.334241   69228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:12:50.626954   69228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:12:50.716403   69228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:12:50.801112   69228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:12:50.817297   69228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:12:50.819025   69228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:12:50.819097   69228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:12:50.957084   69228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:12:50.958773   69228 out.go:235]   - Booting up control plane ...
	I0828 18:12:50.958910   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:12:50.963591   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:12:50.964956   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:12:50.966298   69228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:12:50.972701   69228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:13:30.969560   69228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:13:30.970166   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:13:30.970390   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:13:35.971761   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:13:35.971984   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:13:45.970292   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:13:45.970569   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:14:05.970030   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:14:05.970348   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:14:45.971677   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:14:45.971877   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:14:45.971908   69228 kubeadm.go:310] 
	I0828 18:14:45.971988   69228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:14:45.972054   69228 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:14:45.972075   69228 kubeadm.go:310] 
	I0828 18:14:45.972134   69228 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:14:45.972171   69228 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:14:45.972305   69228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:14:45.972317   69228 kubeadm.go:310] 
	I0828 18:14:45.972494   69228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:14:45.972557   69228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:14:45.972593   69228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:14:45.972600   69228 kubeadm.go:310] 
	I0828 18:14:45.972727   69228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:14:45.972837   69228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:14:45.972857   69228 kubeadm.go:310] 
	I0828 18:14:45.972949   69228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:14:45.973024   69228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:14:45.973162   69228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:14:45.973282   69228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:14:45.973302   69228 kubeadm.go:310] 
	I0828 18:14:45.974012   69228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:14:45.974173   69228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:14:45.974335   69228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0828 18:14:45.974380   69228 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-131737] and IPs [192.168.50.99 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-131737] and IPs [192.168.50.99 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:14:45.974425   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:14:46.430384   69228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:14:46.443967   69228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:14:46.453342   69228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:14:46.453369   69228 kubeadm.go:157] found existing configuration files:
	
	I0828 18:14:46.453415   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:14:46.461867   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:14:46.461934   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:14:46.470934   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:14:46.479394   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:14:46.479464   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:14:46.488209   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:14:46.496575   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:14:46.496654   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:14:46.505136   69228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:14:46.513524   69228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:14:46.513586   69228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:14:46.522765   69228 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:14:46.586063   69228 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:14:46.586163   69228 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:14:46.717154   69228 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:14:46.717308   69228 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:14:46.717464   69228 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:14:46.887272   69228 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:14:46.889256   69228 out.go:235]   - Generating certificates and keys ...
	I0828 18:14:46.889418   69228 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:14:46.889564   69228 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:14:46.889740   69228 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:14:46.889943   69228 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:14:46.890123   69228 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:14:46.890257   69228 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:14:46.890776   69228 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:14:46.890884   69228 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:14:46.890989   69228 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:14:46.891114   69228 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:14:46.891186   69228 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:14:46.891281   69228 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:14:47.224498   69228 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:14:47.296280   69228 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:14:47.634545   69228 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:14:48.088159   69228 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:14:48.107486   69228 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:14:48.108453   69228 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:14:48.108541   69228 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:14:48.244682   69228 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:14:48.246846   69228 out.go:235]   - Booting up control plane ...
	I0828 18:14:48.246990   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:14:48.250848   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:14:48.253783   69228 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:14:48.253912   69228 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:14:48.256855   69228 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:15:28.259672   69228 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:15:28.260102   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:15:28.260329   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:15:33.261100   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:15:33.261284   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:15:43.261594   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:15:43.261860   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:16:03.260886   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:16:03.261098   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:16:43.260880   69228 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:16:43.261138   69228 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:16:43.261162   69228 kubeadm.go:310] 
	I0828 18:16:43.261215   69228 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:16:43.261268   69228 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:16:43.261275   69228 kubeadm.go:310] 
	I0828 18:16:43.261319   69228 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:16:43.261366   69228 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:16:43.261456   69228 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:16:43.261463   69228 kubeadm.go:310] 
	I0828 18:16:43.261590   69228 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:16:43.261639   69228 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:16:43.261684   69228 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:16:43.261695   69228 kubeadm.go:310] 
	I0828 18:16:43.261805   69228 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:16:43.261928   69228 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:16:43.261950   69228 kubeadm.go:310] 
	I0828 18:16:43.262087   69228 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:16:43.262202   69228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:16:43.262308   69228 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:16:43.262388   69228 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:16:43.262406   69228 kubeadm.go:310] 
	I0828 18:16:43.262910   69228 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:16:43.263022   69228 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:16:43.263113   69228 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:16:43.263181   69228 kubeadm.go:394] duration metric: took 3m55.942472311s to StartCluster
	I0828 18:16:43.263216   69228 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:16:43.263268   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:16:43.309426   69228 cri.go:89] found id: ""
	I0828 18:16:43.309466   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.309474   69228 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:16:43.309479   69228 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:16:43.309548   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:16:43.343834   69228 cri.go:89] found id: ""
	I0828 18:16:43.343864   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.343874   69228 logs.go:278] No container was found matching "etcd"
	I0828 18:16:43.343882   69228 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:16:43.343945   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:16:43.378106   69228 cri.go:89] found id: ""
	I0828 18:16:43.378131   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.378138   69228 logs.go:278] No container was found matching "coredns"
	I0828 18:16:43.378143   69228 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:16:43.378191   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:16:43.414229   69228 cri.go:89] found id: ""
	I0828 18:16:43.414257   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.414267   69228 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:16:43.414274   69228 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:16:43.414338   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:16:43.445926   69228 cri.go:89] found id: ""
	I0828 18:16:43.445953   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.445960   69228 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:16:43.445966   69228 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:16:43.446018   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:16:43.479915   69228 cri.go:89] found id: ""
	I0828 18:16:43.479945   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.479955   69228 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:16:43.479963   69228 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:16:43.480027   69228 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:16:43.510983   69228 cri.go:89] found id: ""
	I0828 18:16:43.511014   69228 logs.go:276] 0 containers: []
	W0828 18:16:43.511023   69228 logs.go:278] No container was found matching "kindnet"
	I0828 18:16:43.511032   69228 logs.go:123] Gathering logs for kubelet ...
	I0828 18:16:43.511044   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:16:43.563764   69228 logs.go:123] Gathering logs for dmesg ...
	I0828 18:16:43.563803   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:16:43.576449   69228 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:16:43.576472   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:16:43.691208   69228 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:16:43.691232   69228 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:16:43.691248   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:16:43.812438   69228 logs.go:123] Gathering logs for container status ...
	I0828 18:16:43.812476   69228 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0828 18:16:43.857576   69228 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:16:43.857626   69228 out.go:270] * 
	* 
	W0828 18:16:43.857686   69228 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:16:43.857705   69228 out.go:270] * 
	* 
	W0828 18:16:43.858817   69228 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:16:43.862026   69228 out.go:201] 
	W0828 18:16:43.863239   69228 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:16:43.863278   69228 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:16:43.863296   69228 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:16:43.864775   69228 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 6 (213.167467ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:44.120850   75997 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-131737" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (303.16s)
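Note on the failure above: the kubelet never answers its health probe during `kubeadm init` (every `curl -sSL http://localhost:10248/healthz` check is refused), no control-plane containers are ever created (all `crictl ps` queries return empty), and minikube exits with K8S_KUBELET_NOT_RUNNING (exit status 109). The log's own suggestion is to inspect the kubelet journal and retry with `--extra-config=kubelet.cgroup-driver=systemd`. The sketch below is illustrative only, not a verified fix: it reuses the profile name and binary path recorded in the log, assumes shell access on the Jenkins host, and simply combines the diagnostic commands from the kubeadm hint with the retry flags from the failing invocation plus the suggested extra-config.

	# Inspect kubelet state inside the guest (commands taken from the kubeadm hint above)
	minikube ssh -p old-k8s-version-131737 -- 'sudo systemctl status kubelet'
	minikube ssh -p old-k8s-version-131737 -- 'sudo journalctl -xeu kubelet | tail -n 100'
	minikube ssh -p old-k8s-version-131737 -- \
	  'sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'

	# Retry with the cgroup driver the log suggests; remaining flags mirror the failing invocation.
	out/minikube-linux-amd64 delete -p old-k8s-version-131737
	out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

If the kubelet journal shows a cgroup-driver mismatch between the kubelet and CRI-O, the retry above may be enough; otherwise the related issue linked in the log (kubernetes/minikube#4172) is the next place to look.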

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-072854 --alsologtostderr -v=3
E0828 18:14:11.659761   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.459397   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.465795   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.477140   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.498523   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.540255   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.621792   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:21.783363   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:22.105000   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:22.747048   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:23.524567   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:24.029248   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:14:26.590783   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-072854 --alsologtostderr -v=3: exit status 82 (2m0.55176903s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-072854"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:14:05.206039   75005 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:14:05.206220   75005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:05.206269   75005 out.go:358] Setting ErrFile to fd 2...
	I0828 18:14:05.206287   75005 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:05.206513   75005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:14:05.206766   75005 out.go:352] Setting JSON to false
	I0828 18:14:05.206858   75005 mustload.go:65] Loading cluster: no-preload-072854
	I0828 18:14:05.207195   75005 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:05.207278   75005 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:14:05.207458   75005 mustload.go:65] Loading cluster: no-preload-072854
	I0828 18:14:05.207587   75005 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:05.207625   75005 stop.go:39] StopHost: no-preload-072854
	I0828 18:14:05.208055   75005 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:14:05.208120   75005 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:14:05.225314   75005 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44111
	I0828 18:14:05.225878   75005 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:14:05.226764   75005 main.go:141] libmachine: Using API Version  1
	I0828 18:14:05.226791   75005 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:14:05.227294   75005 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:14:05.230138   75005 out.go:177] * Stopping node "no-preload-072854"  ...
	I0828 18:14:05.231465   75005 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 18:14:05.231504   75005 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:14:05.231779   75005 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 18:14:05.231834   75005 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:14:05.235732   75005 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:14:05.236319   75005 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:12:54 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:14:05.236385   75005 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:14:05.236721   75005 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:14:05.236974   75005 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:14:05.237183   75005 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:14:05.237330   75005 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:14:05.360273   75005 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 18:14:05.420986   75005 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 18:14:05.488223   75005 main.go:141] libmachine: Stopping "no-preload-072854"...
	I0828 18:14:05.488257   75005 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:14:05.490331   75005 main.go:141] libmachine: (no-preload-072854) Calling .Stop
	I0828 18:14:05.494875   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 0/120
	I0828 18:14:06.496737   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 1/120
	I0828 18:14:07.498281   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 2/120
	I0828 18:14:08.499861   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 3/120
	I0828 18:14:09.501952   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 4/120
	I0828 18:14:10.504074   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 5/120
	I0828 18:14:11.505343   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 6/120
	I0828 18:14:12.506798   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 7/120
	I0828 18:14:13.508296   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 8/120
	I0828 18:14:14.509715   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 9/120
	I0828 18:14:15.511019   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 10/120
	I0828 18:14:16.513156   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 11/120
	I0828 18:14:17.515006   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 12/120
	I0828 18:14:18.516740   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 13/120
	I0828 18:14:19.518583   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 14/120
	I0828 18:14:20.520609   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 15/120
	I0828 18:14:21.521926   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 16/120
	I0828 18:14:22.523248   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 17/120
	I0828 18:14:23.524653   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 18/120
	I0828 18:14:24.526397   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 19/120
	I0828 18:14:25.528404   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 20/120
	I0828 18:14:26.529980   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 21/120
	I0828 18:14:27.531759   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 22/120
	I0828 18:14:28.533525   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 23/120
	I0828 18:14:29.534932   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 24/120
	I0828 18:14:30.536905   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 25/120
	I0828 18:14:31.538165   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 26/120
	I0828 18:14:32.539634   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 27/120
	I0828 18:14:33.541755   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 28/120
	I0828 18:14:34.543936   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 29/120
	I0828 18:14:35.546057   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 30/120
	I0828 18:14:36.548500   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 31/120
	I0828 18:14:37.550032   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 32/120
	I0828 18:14:38.551586   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 33/120
	I0828 18:14:39.552799   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 34/120
	I0828 18:14:40.554441   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 35/120
	I0828 18:14:41.556666   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 36/120
	I0828 18:14:42.557964   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 37/120
	I0828 18:14:43.559256   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 38/120
	I0828 18:14:44.560598   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 39/120
	I0828 18:14:45.563159   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 40/120
	I0828 18:14:46.564748   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 41/120
	I0828 18:14:47.566214   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 42/120
	I0828 18:14:48.568419   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 43/120
	I0828 18:14:49.569790   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 44/120
	I0828 18:14:50.571620   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 45/120
	I0828 18:14:51.572933   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 46/120
	I0828 18:14:52.574396   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 47/120
	I0828 18:14:53.575892   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 48/120
	I0828 18:14:54.577289   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 49/120
	I0828 18:14:55.579372   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 50/120
	I0828 18:14:56.580606   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 51/120
	I0828 18:14:57.581912   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 52/120
	I0828 18:14:58.583309   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 53/120
	I0828 18:14:59.584941   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 54/120
	I0828 18:15:00.586869   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 55/120
	I0828 18:15:01.588317   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 56/120
	I0828 18:15:02.589630   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 57/120
	I0828 18:15:03.591005   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 58/120
	I0828 18:15:04.592351   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 59/120
	I0828 18:15:05.594392   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 60/120
	I0828 18:15:06.595721   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 61/120
	I0828 18:15:07.597116   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 62/120
	I0828 18:15:08.598580   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 63/120
	I0828 18:15:09.599894   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 64/120
	I0828 18:15:10.601810   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 65/120
	I0828 18:15:11.603150   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 66/120
	I0828 18:15:12.604590   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 67/120
	I0828 18:15:13.605864   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 68/120
	I0828 18:15:14.607302   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 69/120
	I0828 18:15:15.609496   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 70/120
	I0828 18:15:16.611005   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 71/120
	I0828 18:15:17.612354   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 72/120
	I0828 18:15:18.613865   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 73/120
	I0828 18:15:19.615259   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 74/120
	I0828 18:15:20.617349   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 75/120
	I0828 18:15:21.618710   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 76/120
	I0828 18:15:22.620066   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 77/120
	I0828 18:15:23.621535   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 78/120
	I0828 18:15:24.622825   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 79/120
	I0828 18:15:25.625018   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 80/120
	I0828 18:15:26.626429   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 81/120
	I0828 18:15:27.627751   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 82/120
	I0828 18:15:28.629053   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 83/120
	I0828 18:15:29.630684   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 84/120
	I0828 18:15:30.632411   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 85/120
	I0828 18:15:31.633722   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 86/120
	I0828 18:15:32.635013   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 87/120
	I0828 18:15:33.636362   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 88/120
	I0828 18:15:34.637849   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 89/120
	I0828 18:15:35.640199   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 90/120
	I0828 18:15:36.641542   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 91/120
	I0828 18:15:37.643192   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 92/120
	I0828 18:15:38.644508   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 93/120
	I0828 18:15:39.645803   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 94/120
	I0828 18:15:40.647591   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 95/120
	I0828 18:15:41.648858   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 96/120
	I0828 18:15:42.650161   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 97/120
	I0828 18:15:43.651467   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 98/120
	I0828 18:15:44.653522   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 99/120
	I0828 18:15:45.655531   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 100/120
	I0828 18:15:46.657070   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 101/120
	I0828 18:15:47.658665   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 102/120
	I0828 18:15:48.660144   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 103/120
	I0828 18:15:49.661451   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 104/120
	I0828 18:15:50.663528   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 105/120
	I0828 18:15:51.664981   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 106/120
	I0828 18:15:52.666250   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 107/120
	I0828 18:15:53.667599   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 108/120
	I0828 18:15:54.668827   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 109/120
	I0828 18:15:55.670940   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 110/120
	I0828 18:15:56.672229   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 111/120
	I0828 18:15:57.673632   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 112/120
	I0828 18:15:58.674970   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 113/120
	I0828 18:15:59.676327   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 114/120
	I0828 18:16:00.678339   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 115/120
	I0828 18:16:01.679656   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 116/120
	I0828 18:16:02.680965   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 117/120
	I0828 18:16:03.682253   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 118/120
	I0828 18:16:04.683661   75005 main.go:141] libmachine: (no-preload-072854) Waiting for machine to stop 119/120
	I0828 18:16:05.684890   75005 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 18:16:05.684954   75005 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0828 18:16:05.686819   75005 out.go:201] 
	W0828 18:16:05.688045   75005 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0828 18:16:05.688061   75005 out.go:270] * 
	* 
	W0828 18:16:05.690713   75005 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:16:05.692211   75005 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-072854 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
E0828 18:16:06.243607   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:08.805489   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:13.927646   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854: exit status 3 (18.44507359s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:24.138458   75701 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host
	E0828 18:16:24.138479   75701 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-072854" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.00s)
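Note: the stop failure above follows a fixed pattern: minikube backs up /etc/cni and /etc/kubernetes, asks the kvm2 driver to stop the VM, then polls the machine state once per second for 120 attempts ("Waiting for machine to stop i/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. The sketch below only illustrates that kind of bounded polling loop under those assumptions; it is not minikube's actual implementation, and the names waitForStop, stateFn and ErrStopTimeout are hypothetical.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// ErrStopTimeout is a hypothetical sentinel for the "still Running after
	// all retries" case seen in the log above (GUEST_STOP_TIMEOUT, exit status 82).
	var ErrStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

	// waitForStop polls stateFn once per interval, up to maxRetries times,
	// mirroring the "Waiting for machine to stop i/120" lines in the log.
	func waitForStop(stateFn func() string, maxRetries int, interval time.Duration) error {
		for i := 0; i < maxRetries; i++ {
			if stateFn() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(interval)
		}
		return ErrStopTimeout
	}

	func main() {
		// A state function that never reaches "Stopped" reproduces the timeout
		// behaviour from the report (tiny numbers here to keep the demo fast).
		err := waitForStop(func() string { return "Running" }, 3, 10*time.Millisecond)
		fmt.Println("stop err:", err)
	}

In the report the same loop runs with 120 one-second attempts, which matches the roughly two minutes (2m0.48s) each failed stop consumes before the test moves on to its post-mortem status check.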

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-014980 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-014980 --alsologtostderr -v=3: exit status 82 (2m0.485711485s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-014980"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:14:41.998617   75296 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:14:41.998879   75296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:41.998889   75296 out.go:358] Setting ErrFile to fd 2...
	I0828 18:14:41.998894   75296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:41.999099   75296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:14:41.999371   75296 out.go:352] Setting JSON to false
	I0828 18:14:41.999470   75296 mustload.go:65] Loading cluster: embed-certs-014980
	I0828 18:14:41.999791   75296 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:41.999883   75296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:14:42.000052   75296 mustload.go:65] Loading cluster: embed-certs-014980
	I0828 18:14:42.000178   75296 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:42.000220   75296 stop.go:39] StopHost: embed-certs-014980
	I0828 18:14:42.000618   75296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:14:42.000656   75296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:14:42.015770   75296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0828 18:14:42.016208   75296 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:14:42.016996   75296 main.go:141] libmachine: Using API Version  1
	I0828 18:14:42.017022   75296 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:14:42.017470   75296 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:14:42.019742   75296 out.go:177] * Stopping node "embed-certs-014980"  ...
	I0828 18:14:42.021237   75296 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 18:14:42.021277   75296 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:14:42.021511   75296 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 18:14:42.021534   75296 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:14:42.024373   75296 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:14:42.024758   75296 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:13:25 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:14:42.024787   75296 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:14:42.024942   75296 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:14:42.025073   75296 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:14:42.025180   75296 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:14:42.025341   75296 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:14:42.136609   75296 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 18:14:42.196013   75296 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 18:14:42.243219   75296 main.go:141] libmachine: Stopping "embed-certs-014980"...
	I0828 18:14:42.243254   75296 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:14:42.245356   75296 main.go:141] libmachine: (embed-certs-014980) Calling .Stop
	I0828 18:14:42.249216   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 0/120
	I0828 18:14:43.250459   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 1/120
	I0828 18:14:44.251848   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 2/120
	I0828 18:14:45.253456   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 3/120
	I0828 18:14:46.254799   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 4/120
	I0828 18:14:47.257260   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 5/120
	I0828 18:14:48.259242   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 6/120
	I0828 18:14:49.260647   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 7/120
	I0828 18:14:50.261941   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 8/120
	I0828 18:14:51.263350   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 9/120
	I0828 18:14:52.264783   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 10/120
	I0828 18:14:53.266006   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 11/120
	I0828 18:14:54.267358   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 12/120
	I0828 18:14:55.268611   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 13/120
	I0828 18:14:56.269952   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 14/120
	I0828 18:14:57.272025   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 15/120
	I0828 18:14:58.273355   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 16/120
	I0828 18:14:59.274770   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 17/120
	I0828 18:15:00.276748   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 18/120
	I0828 18:15:01.278039   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 19/120
	I0828 18:15:02.280289   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 20/120
	I0828 18:15:03.281525   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 21/120
	I0828 18:15:04.282772   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 22/120
	I0828 18:15:05.283975   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 23/120
	I0828 18:15:06.285264   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 24/120
	I0828 18:15:07.287364   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 25/120
	I0828 18:15:08.288726   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 26/120
	I0828 18:15:09.290206   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 27/120
	I0828 18:15:10.291665   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 28/120
	I0828 18:15:11.293181   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 29/120
	I0828 18:15:12.295377   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 30/120
	I0828 18:15:13.296895   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 31/120
	I0828 18:15:14.298342   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 32/120
	I0828 18:15:15.299882   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 33/120
	I0828 18:15:16.301316   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 34/120
	I0828 18:15:17.303308   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 35/120
	I0828 18:15:18.304632   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 36/120
	I0828 18:15:19.306146   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 37/120
	I0828 18:15:20.307466   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 38/120
	I0828 18:15:21.309136   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 39/120
	I0828 18:15:22.311351   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 40/120
	I0828 18:15:23.313349   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 41/120
	I0828 18:15:24.314787   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 42/120
	I0828 18:15:25.316207   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 43/120
	I0828 18:15:26.317540   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 44/120
	I0828 18:15:27.319493   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 45/120
	I0828 18:15:28.320803   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 46/120
	I0828 18:15:29.322321   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 47/120
	I0828 18:15:30.323563   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 48/120
	I0828 18:15:31.325020   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 49/120
	I0828 18:15:32.327461   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 50/120
	I0828 18:15:33.329006   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 51/120
	I0828 18:15:34.330597   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 52/120
	I0828 18:15:35.332108   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 53/120
	I0828 18:15:36.333665   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 54/120
	I0828 18:15:37.335684   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 55/120
	I0828 18:15:38.337243   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 56/120
	I0828 18:15:39.338729   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 57/120
	I0828 18:15:40.340289   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 58/120
	I0828 18:15:41.341909   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 59/120
	I0828 18:15:42.344210   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 60/120
	I0828 18:15:43.345597   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 61/120
	I0828 18:15:44.347077   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 62/120
	I0828 18:15:45.348421   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 63/120
	I0828 18:15:46.349811   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 64/120
	I0828 18:15:47.351861   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 65/120
	I0828 18:15:48.353396   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 66/120
	I0828 18:15:49.354831   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 67/120
	I0828 18:15:50.356381   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 68/120
	I0828 18:15:51.358484   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 69/120
	I0828 18:15:52.360823   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 70/120
	I0828 18:15:53.362217   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 71/120
	I0828 18:15:54.363504   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 72/120
	I0828 18:15:55.364848   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 73/120
	I0828 18:15:56.366398   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 74/120
	I0828 18:15:57.368405   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 75/120
	I0828 18:15:58.369651   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 76/120
	I0828 18:15:59.371132   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 77/120
	I0828 18:16:00.372383   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 78/120
	I0828 18:16:01.373773   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 79/120
	I0828 18:16:02.376071   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 80/120
	I0828 18:16:03.377445   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 81/120
	I0828 18:16:04.378756   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 82/120
	I0828 18:16:05.380065   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 83/120
	I0828 18:16:06.381331   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 84/120
	I0828 18:16:07.383315   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 85/120
	I0828 18:16:08.384691   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 86/120
	I0828 18:16:09.386025   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 87/120
	I0828 18:16:10.387303   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 88/120
	I0828 18:16:11.388667   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 89/120
	I0828 18:16:12.390039   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 90/120
	I0828 18:16:13.391333   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 91/120
	I0828 18:16:14.392618   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 92/120
	I0828 18:16:15.393967   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 93/120
	I0828 18:16:16.395402   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 94/120
	I0828 18:16:17.397327   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 95/120
	I0828 18:16:18.398746   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 96/120
	I0828 18:16:19.400027   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 97/120
	I0828 18:16:20.401278   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 98/120
	I0828 18:16:21.402670   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 99/120
	I0828 18:16:22.403870   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 100/120
	I0828 18:16:23.405243   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 101/120
	I0828 18:16:24.406550   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 102/120
	I0828 18:16:25.408169   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 103/120
	I0828 18:16:26.409647   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 104/120
	I0828 18:16:27.411540   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 105/120
	I0828 18:16:28.412836   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 106/120
	I0828 18:16:29.414327   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 107/120
	I0828 18:16:30.415761   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 108/120
	I0828 18:16:31.417197   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 109/120
	I0828 18:16:32.419409   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 110/120
	I0828 18:16:33.420802   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 111/120
	I0828 18:16:34.422120   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 112/120
	I0828 18:16:35.423382   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 113/120
	I0828 18:16:36.424885   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 114/120
	I0828 18:16:37.427186   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 115/120
	I0828 18:16:38.428758   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 116/120
	I0828 18:16:39.430549   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 117/120
	I0828 18:16:40.431870   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 118/120
	I0828 18:16:41.433259   75296 main.go:141] libmachine: (embed-certs-014980) Waiting for machine to stop 119/120
	I0828 18:16:42.434607   75296 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 18:16:42.434659   75296 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0828 18:16:42.436583   75296 out.go:201] 
	W0828 18:16:42.437818   75296 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0828 18:16:42.437850   75296 out.go:270] * 
	* 
	W0828 18:16:42.440637   75296 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:16:42.441761   75296 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-014980 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980: exit status 3 (18.55875425s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:01.002372   75965 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E0828 18:17:01.002391   75965 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-014980" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.05s)
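Note: before each stop attempt the log shows the same preparation step: create /var/lib/minikube/backup inside the guest, then rsync /etc/cni and /etc/kubernetes into it with --archive --relative. The sketch below replays those two shell commands from a local Go process purely as an illustration; in the report they run over SSH inside the VM via minikube's ssh_runner, and the helper name backupConfig is made up.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// backupConfig mirrors the backup step from the log: it creates the backup
	// directory and rsyncs each path into it with --archive --relative.
	// (Hypothetical helper; the report performs this over SSH inside the VM.)
	func backupConfig(backupDir string, paths []string) error {
		if out, err := exec.Command("sudo", "mkdir", "-p", backupDir).CombinedOutput(); err != nil {
			return fmt.Errorf("mkdir %s: %v: %s", backupDir, err, out)
		}
		for _, p := range paths {
			if out, err := exec.Command("sudo", "rsync", "--archive", "--relative", p, backupDir).CombinedOutput(); err != nil {
				return fmt.Errorf("rsync %s: %v: %s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		// Same paths as in the log: /etc/cni and /etc/kubernetes.
		if err := backupConfig("/var/lib/minikube/backup", []string{"/etc/cni", "/etc/kubernetes"}); err != nil {
			fmt.Println("backup failed:", err)
		}
	}

The backup itself succeeds in every captured run; it is the subsequent libvirt stop that never completes within the 120 retries.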

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-640552 --alsologtostderr -v=3
E0828 18:15:02.436741   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:13.103585   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:43.398995   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:44.865351   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:44.871689   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:44.882990   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:44.904357   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:44.945712   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:45.027195   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:45.188705   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:45.509951   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:46.151356   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:47.433253   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:49.995409   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:15:55.116822   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.674109   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.680442   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.692507   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.713888   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.755305   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.836804   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:03.998301   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:04.320021   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:04.961476   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:05.358265   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-640552 --alsologtostderr -v=3: exit status 82 (2m0.495500869s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-640552"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:14:44.187226   75365 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:14:44.187470   75365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:44.187481   75365 out.go:358] Setting ErrFile to fd 2...
	I0828 18:14:44.187487   75365 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:14:44.187950   75365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:14:44.188294   75365 out.go:352] Setting JSON to false
	I0828 18:14:44.188527   75365 mustload.go:65] Loading cluster: default-k8s-diff-port-640552
	I0828 18:14:44.189013   75365 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:44.189089   75365 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:14:44.189272   75365 mustload.go:65] Loading cluster: default-k8s-diff-port-640552
	I0828 18:14:44.189372   75365 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:14:44.189412   75365 stop.go:39] StopHost: default-k8s-diff-port-640552
	I0828 18:14:44.189753   75365 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:14:44.189794   75365 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:14:44.204586   75365 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33473
	I0828 18:14:44.205081   75365 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:14:44.205588   75365 main.go:141] libmachine: Using API Version  1
	I0828 18:14:44.205618   75365 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:14:44.205924   75365 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:14:44.208391   75365 out.go:177] * Stopping node "default-k8s-diff-port-640552"  ...
	I0828 18:14:44.209713   75365 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0828 18:14:44.209747   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:14:44.209995   75365 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0828 18:14:44.210022   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:14:44.212835   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:14:44.213218   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:13:52 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:14:44.213251   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:14:44.213403   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:14:44.213591   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:14:44.213764   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:14:44.213928   75365 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:14:44.296667   75365 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0828 18:14:44.362609   75365 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0828 18:14:44.439040   75365 main.go:141] libmachine: Stopping "default-k8s-diff-port-640552"...
	I0828 18:14:44.439070   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:14:44.440920   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Stop
	I0828 18:14:44.444792   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 0/120
	I0828 18:14:45.446267   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 1/120
	I0828 18:14:46.448676   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 2/120
	I0828 18:14:47.450100   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 3/120
	I0828 18:14:48.452070   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 4/120
	I0828 18:14:49.454257   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 5/120
	I0828 18:14:50.455723   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 6/120
	I0828 18:14:51.456937   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 7/120
	I0828 18:14:52.458448   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 8/120
	I0828 18:14:53.459947   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 9/120
	I0828 18:14:54.462173   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 10/120
	I0828 18:14:55.463313   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 11/120
	I0828 18:14:56.464628   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 12/120
	I0828 18:14:57.465923   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 13/120
	I0828 18:14:58.467251   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 14/120
	I0828 18:14:59.469181   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 15/120
	I0828 18:15:00.470648   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 16/120
	I0828 18:15:01.471952   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 17/120
	I0828 18:15:02.473294   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 18/120
	I0828 18:15:03.474660   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 19/120
	I0828 18:15:04.476794   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 20/120
	I0828 18:15:05.478256   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 21/120
	I0828 18:15:06.479622   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 22/120
	I0828 18:15:07.481208   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 23/120
	I0828 18:15:08.482572   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 24/120
	I0828 18:15:09.484547   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 25/120
	I0828 18:15:10.485790   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 26/120
	I0828 18:15:11.487157   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 27/120
	I0828 18:15:12.488554   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 28/120
	I0828 18:15:13.489943   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 29/120
	I0828 18:15:14.492485   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 30/120
	I0828 18:15:15.493749   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 31/120
	I0828 18:15:16.495326   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 32/120
	I0828 18:15:17.496699   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 33/120
	I0828 18:15:18.498110   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 34/120
	I0828 18:15:19.500217   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 35/120
	I0828 18:15:20.501527   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 36/120
	I0828 18:15:21.503112   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 37/120
	I0828 18:15:22.504520   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 38/120
	I0828 18:15:23.505814   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 39/120
	I0828 18:15:24.508016   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 40/120
	I0828 18:15:25.509337   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 41/120
	I0828 18:15:26.510880   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 42/120
	I0828 18:15:27.512356   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 43/120
	I0828 18:15:28.513915   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 44/120
	I0828 18:15:29.515951   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 45/120
	I0828 18:15:30.517147   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 46/120
	I0828 18:15:31.518805   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 47/120
	I0828 18:15:32.520153   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 48/120
	I0828 18:15:33.521396   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 49/120
	I0828 18:15:34.523427   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 50/120
	I0828 18:15:35.524584   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 51/120
	I0828 18:15:36.526749   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 52/120
	I0828 18:15:37.528301   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 53/120
	I0828 18:15:38.529542   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 54/120
	I0828 18:15:39.531409   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 55/120
	I0828 18:15:40.532585   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 56/120
	I0828 18:15:41.533828   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 57/120
	I0828 18:15:42.535231   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 58/120
	I0828 18:15:43.536515   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 59/120
	I0828 18:15:44.538858   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 60/120
	I0828 18:15:45.540643   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 61/120
	I0828 18:15:46.541982   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 62/120
	I0828 18:15:47.543365   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 63/120
	I0828 18:15:48.544869   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 64/120
	I0828 18:15:49.546752   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 65/120
	I0828 18:15:50.548833   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 66/120
	I0828 18:15:51.550246   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 67/120
	I0828 18:15:52.551766   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 68/120
	I0828 18:15:53.553127   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 69/120
	I0828 18:15:54.555294   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 70/120
	I0828 18:15:55.556614   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 71/120
	I0828 18:15:56.558132   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 72/120
	I0828 18:15:57.559589   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 73/120
	I0828 18:15:58.561024   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 74/120
	I0828 18:15:59.563136   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 75/120
	I0828 18:16:00.564547   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 76/120
	I0828 18:16:01.565815   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 77/120
	I0828 18:16:02.567349   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 78/120
	I0828 18:16:03.568584   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 79/120
	I0828 18:16:04.570909   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 80/120
	I0828 18:16:05.572484   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 81/120
	I0828 18:16:06.573997   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 82/120
	I0828 18:16:07.575478   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 83/120
	I0828 18:16:08.576965   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 84/120
	I0828 18:16:09.579048   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 85/120
	I0828 18:16:10.580615   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 86/120
	I0828 18:16:11.582257   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 87/120
	I0828 18:16:12.584551   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 88/120
	I0828 18:16:13.585823   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 89/120
	I0828 18:16:14.587959   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 90/120
	I0828 18:16:15.589260   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 91/120
	I0828 18:16:16.590752   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 92/120
	I0828 18:16:17.592268   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 93/120
	I0828 18:16:18.593446   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 94/120
	I0828 18:16:19.595478   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 95/120
	I0828 18:16:20.596821   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 96/120
	I0828 18:16:21.598273   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 97/120
	I0828 18:16:22.599586   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 98/120
	I0828 18:16:23.601055   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 99/120
	I0828 18:16:24.602994   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 100/120
	I0828 18:16:25.604572   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 101/120
	I0828 18:16:26.605863   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 102/120
	I0828 18:16:27.607225   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 103/120
	I0828 18:16:28.608870   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 104/120
	I0828 18:16:29.611094   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 105/120
	I0828 18:16:30.612415   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 106/120
	I0828 18:16:31.613763   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 107/120
	I0828 18:16:32.615090   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 108/120
	I0828 18:16:33.616386   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 109/120
	I0828 18:16:34.618446   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 110/120
	I0828 18:16:35.620518   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 111/120
	I0828 18:16:36.621963   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 112/120
	I0828 18:16:37.623267   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 113/120
	I0828 18:16:38.624696   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 114/120
	I0828 18:16:39.627006   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 115/120
	I0828 18:16:40.628514   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 116/120
	I0828 18:16:41.629743   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 117/120
	I0828 18:16:42.630977   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 118/120
	I0828 18:16:43.632761   75365 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for machine to stop 119/120
	I0828 18:16:44.634182   75365 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0828 18:16:44.634231   75365 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0828 18:16:44.635875   75365 out.go:201] 
	W0828 18:16:44.637061   75365 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0828 18:16:44.637077   75365 out.go:270] * 
	* 
	W0828 18:16:44.639618   75365 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:16:44.640929   75365 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-640552 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
E0828 18:16:44.650611   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.397462   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.403880   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.415239   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.436601   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.478038   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.559478   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:55.721019   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:56.042679   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:56.684746   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:57.966707   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:00.529050   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552: exit status 3 (18.662655235s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:03.306402   76110 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0828 18:17:03.306420   76110 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-640552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
E0828 18:16:24.169306   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:16:25.840022   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854: exit status 3 (3.167547194s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:27.306447   75780 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host
	E0828 18:16:27.306470   75780 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-072854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-072854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152481011s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-072854 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
E0828 18:16:35.025646   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854: exit status 3 (3.063417801s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:36.522450   75862 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host
	E0828 18:16:36.522475   75862 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.138:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-072854" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-131737 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-131737 create -f testdata/busybox.yaml: exit status 1 (42.55011ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-131737" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-131737 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 6 (210.817787ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:44.375695   76037 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-131737" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 6 (208.341595ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:16:44.583405   76067 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-131737" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-131737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-131737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.848430621s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-131737 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-131737 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-131737 describe deploy/metrics-server -n kube-system: exit status 1 (43.526203ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-131737" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-131737 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 6 (212.703678ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:18:40.688320   77261 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-131737" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980: exit status 3 (3.167752855s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:04.170448   76222 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E0828 18:17:04.170470   76222 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-014980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0828 18:17:05.320392   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:05.650324   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-014980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15404256s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-014980 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980: exit status 3 (3.061814953s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:13.386435   76358 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host
	E0828 18:17:13.386455   76358 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.130:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-014980" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552: exit status 3 (3.168195619s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:06.474482   76257 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0828 18:17:06.474505   76257 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-640552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0828 18:17:06.802201   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-640552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152966863s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-640552 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552: exit status 3 (3.066693228s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0828 18:17:15.694397   76405 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host
	E0828 18:17:15.694423   76405 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.226:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-640552" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (701.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0828 18:18:47.535237   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:50.971056   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:51.163744   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:56.375067   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:18.867248   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:21.459809   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:23.524747   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:31.932559   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:39.259374   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:19:49.161978   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:20:18.297700   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:20:44.864795   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:20:46.597092   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:20:53.853926   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:21:03.673926   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:21:12.566674   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:21:31.377328   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:21:55.397501   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:22:23.101221   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:22:34.435380   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:23:00.239777   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:23:02.140059   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:23:09.994070   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:23:37.695545   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:23:51.163884   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:24:21.459342   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:24:23.310532   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:24:23.523808   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:25:44.864614   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:26:03.673871   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m37.787058634s)

                                                
                                                
-- stdout --
	* [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
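The profile saved above is plain JSON; on the Jenkins host it can be inspected directly to confirm the settings in the struct dump (a sketch, assuming python3 is available and that the JSON keys mirror the field names printed above):

    python3 -m json.tool /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json \
        | grep -E '"Driver"|"KubernetesVersion"|"ContainerRuntime"'   # expect kvm2 / v1.20.0 / crio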
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
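WaitForSSH simply runs `exit 0` through an external ssh client; the probe logged in the DBG lines above can be reproduced by hand with the same key and options (sketch):

    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa \
        -p 22 docker@192.168.50.99 'exit 0' && echo reachable   # empty output and exit 0 match the log above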
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
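The hostname step is idempotent: the guarded script only touches /etc/hosts when no entry for the node name exists. A quick in-guest verification (sketch) is:

    hostname                                    # expect: old-k8s-version-131737
    grep 'old-k8s-version-131737' /etc/hosts    # expect the 127.0.1.1 mapping written above (or a pre-existing one)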
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
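configureAuth re-issues a server certificate signed by the profile's CA with the SANs listed in the provision.go line above. minikube does this in Go, but an equivalent certificate could be generated with openssl roughly as follows (illustrative sketch only; file names mirror the cert files referenced above):

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.old-k8s-version-131737"
    openssl x509 -req -in server.csr -days 365 \
        -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.99,DNS:localhost,DNS:minikube,DNS:old-k8s-version-131737')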
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
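The guest-clock check compares the guest's `date +%s.%N` output against the host's wall clock; the ~69ms delta logged above is inside tolerance. By hand (sketch, reusing the ssh invocation shown earlier):

    guest=$(ssh -F /dev/null -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa \
        docker@192.168.50.99 'date +%s.%N')
    host=$(date +%s.%N)
    echo "guest-host delta: $(echo "$guest - $host" | bc)s"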
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
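The sed edits above prepare cri-o for the v1.20.0 bootstrap: the pause image is pinned to registry.k8s.io/pause:3.2 and the cgroupfs manager is used with conmon in the pod cgroup. After the restart, the drop-in and the CRI socket can be spot-checked like this (sketch):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.2"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version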
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
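LoadCachedImages checks each required image twice: first against a local docker daemon (every lookup above returns "No such image") and then against the VM's runtime via podman, removing any tag whose ID does not match the expected hash so it can be transferred from the cache. A manual spot-check mirroring one of the lines that follow (sketch):

    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.2   # ID differs from the expected hash
    sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2                       # so the tag is removed before re-transfer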
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
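The lines above show the restart path probing each required image in the node's container runtime with "podman image inspect", tagging every miss as "needs transfer", removing the stale tag with crictl, and then failing to load the image because the cached tarballs under .minikube/cache are absent. Purely as an illustration of that presence probe (not the minikube code path; the sudo/ssh plumbing from the log is omitted and podman is assumed to be on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imagePresent mirrors the "podman image inspect" probe in the log: a zero
    // exit status means the image is already in the runtime's local store.
    func imagePresent(image string) bool {
    	cmd := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image)
    	return cmd.Run() == nil
    }

    func main() {
    	// Image names taken from the log above.
    	images := []string{
    		"registry.k8s.io/pause:3.2",
    		"registry.k8s.io/etcd:3.4.13-0",
    		"registry.k8s.io/coredns:1.7.0",
    	}
    	for _, img := range images {
    		if imagePresent(img) {
    			fmt.Println("present:", img)
    		} else {
    			fmt.Println("needs transfer:", img)
    		}
    	}
    }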
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
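The generated file above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is later written to /var/tmp/minikube/kubeadm.yaml. As a minimal sketch only, assuming the gopkg.in/yaml.v3 package and that promoted path, this is how such a stream can be split and each document identified:

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3" // assumed dependency, used only for this sketch
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break // end of the multi-document stream
    		} else if err != nil {
    			panic(err)
    		}
    		// Report which kubeadm/kubelet/kube-proxy document this is.
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }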
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
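The two commands above first grep /etc/hosts for an existing control-plane.minikube.internal entry and then rewrite the file with the node IP. The same read, filter and append logic, as an illustrative standard-library Go sketch rather than the ssh_runner call minikube actually makes (values copied from the log; writing /etc/hosts requires root):

    package main

    import (
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the shell one-liner in the log: drop any existing
    // line for the host name, append a fresh "IP<tab>name" mapping, write back.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry for the control-plane name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.50.99", "control-plane.minikube.internal"); err != nil {
    		panic(err)
    	}
    }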
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
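Each "openssl x509 -checkend 86400" run above asks whether the named certificate expires within the next 24 hours. A small standalone equivalent using Go's crypto/x509 (illustrative only, not the minikube implementation; the path is one of the certs checked in the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question "openssl x509 -checkend 86400" answers for 24 hours.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }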
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
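For each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf the restart path above greps for the expected control-plane endpoint and removes the file when the endpoint is not found, then promotes kubeadm.yaml.new into place. A compact sketch of that check-and-remove pattern (illustrative only; the grep/rm pairs in the log run over ssh on the node):

    package main

    import (
    	"os"
    	"strings"
    )

    // removeIfStale deletes a kubeconfig file when it does not reference the
    // expected control-plane endpoint, the same test the grep/rm pairs perform.
    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return nil // nothing to clean up
    		}
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // file already points at the right endpoint
    	}
    	return os.Remove(path)
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		if err := removeIfStale(f, endpoint); err != nil {
    			panic(err)
    		}
    	}
    }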
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
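The repeated pgrep runs above are the roughly 500ms polling loop waiting for a kube-apiserver process to appear; after about a minute with no match the flow falls through to log collection below. A hedged sketch of that wait-with-timeout pattern, running pgrep locally instead of through the ssh runner (illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep every interval until the pattern matches or the
    // timeout elapses, mirroring the apiserver wait loop in the log above.
    func waitForProcess(pattern string, interval, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return true // a matching process exists
    		}
    		time.Sleep(interval)
    	}
    	return false
    }

    func main() {
    	ok := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute)
    	fmt.Println("apiserver process found:", ok)
    }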
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
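The failure above amounts to the kubelet never answering on http://localhost:10248/healthz, so kubeadm times out in the wait-control-plane phase. A minimal follow-up sketch for the old-k8s-version-131737 node, using only the commands and the --extra-config suggestion quoted in the output above (whether that override actually resolves this run is an assumption, not something the report confirms):

	# Diagnostics suggested by the kubeadm/minikube messages above (run against the failing profile).
	minikube ssh -p old-k8s-version-131737 "sudo systemctl status kubelet --no-pager"
	minikube ssh -p old-k8s-version-131737 "sudo journalctl -xeu kubelet --no-pager"
	minikube ssh -p old-k8s-version-131737 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override the error message itself recommends
	# (assumes the guest is using the systemd cgroup driver; other original flags omitted for brevity).
	out/minikube-linux-amd64 start -p old-k8s-version-131737 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd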
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (225.101347ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25: (1.576510584s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
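The lines above show the outer recovery path: provisioning is abandoned after repeated SSH failures, a warning is emitted, and the whole host start is retried after a fixed delay. A minimal Go sketch of that shape, where hostConfig and startHost are hypothetical stand-ins and not minikube's actual API:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// hostConfig is a hypothetical stand-in for the machine configuration.
	type hostConfig struct{ Name string }

	// startHost is a hypothetical stand-in for the provisioning step that can
	// fail when the VM never becomes reachable over SSH.
	func startHost(cfg hostConfig) error {
		return fmt.Errorf("provision: host is not running")
	}

	func main() {
		cfg := hostConfig{Name: "no-preload-072854"}
		// First attempt; on failure, warn and retry once after a fixed delay,
		// mirroring the "will try again in 5 seconds" behaviour in the log.
		if err := startHost(cfg); err != nil {
			log.Printf("! StartHost failed, but will try again: %v", err)
			time.Sleep(5 * time.Second)
			if err := startHost(cfg); err != nil {
				log.Fatalf("start failed after retry: %v", err)
			}
		}
		log.Printf("host %q started", cfg.Name)
	}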
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
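The retry.go lines above are the wait-for-IP loop polling the libvirt network with growing, slightly randomized delays until the domain acquires a DHCP lease. A minimal sketch of that polling pattern, assuming a hypothetical lookupIP helper in place of the real libvirt lease query:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// lookupIP is a hypothetical stand-in for querying the network's DHCP
	// leases for the domain's current IP address.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP polls until an IP appears or the deadline passes, roughly
	// doubling the wait between attempts and adding jitter.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(domain); err == nil {
				return ip, nil
			}
			// Add up to 50% jitter so parallel waiters don't poll in lockstep.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			log.Printf("will retry after %v: waiting for machine to come up", sleep)
			time.Sleep(sleep)
			wait *= 2
		}
		return "", fmt.Errorf("timed out waiting for IP of %q", domain)
	}

	func main() {
		ip, err := waitForIP("embed-certs-014980", 30*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("found IP:", ip)
	}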
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
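Each "About to run SSH command" / "SSH cmd err, output" pair above is one remote command executed against the guest with the machine's private key. A minimal sketch of that flow using golang.org/x/crypto/ssh; the address, key path and command below are placeholders taken from the log, and this is not minikube's ssh_runner implementation:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runRemote opens an SSH session as "docker" with the machine's private
	// key and returns the combined output of a single command.
	func runRemote(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runRemote("192.168.72.130:22",
			"/path/to/machines/embed-certs-014980/id_rsa",
			`sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatalf("SSH cmd err: %v", err)
		}
		fmt.Print(out)
	}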
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
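configureAuth generates a server certificate signed by the minikube CA and carrying the SANs listed above. A minimal, self-contained crypto/x509 sketch of issuing such a SAN-bearing server certificate; it generates a throwaway CA in memory rather than reading ca.pem/ca-key.pem from the .minikube/certs directory:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Server certificate with the SANs seen in the log: loopback, the VM
		// IP, the profile name, localhost and minikube.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-014980"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-014980", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.130")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		// Emit the server certificate as PEM, as server.pem would contain.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}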
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
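fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the host when the delta is small enough. A minimal sketch of that comparison; the one-second tolerance below is an assumption for illustration, not the value minikube uses:

	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns `date +%s.%N` output such as
	// "1724869293.352212530" into a time.Time (assumes 9 fractional digits).
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1724869293.352212530")
		if err != nil {
			log.Fatal(err)
		}
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed tolerance, for illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock is off by %v, time sync may be needed\n", delta)
		}
	}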
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
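The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts cri-o so the changes take effect. A minimal sketch of driving that same ordered command list through a generic runner; the commands are the ones from the log, but the runner here just shells out locally, whereas minikube executes them on the guest over SSH:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes one shell command locally; the real flow sends each
	// command to the guest over SSH instead.
	func run(cmd string) error {
		if out, err := exec.Command("sh", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
		}
		return nil
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf"
		steps := []string{
			// Point cri-o at the expected pause image.
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
			// Use cgroupfs as the cgroup manager and put conmon in the pod cgroup.
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
			fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
			// Reload units and restart cri-o so the new config takes effect.
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, s := range steps {
			if err := run(s); err != nil {
				log.Fatal(err)
			}
		}
	}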
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
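	The five ssh_runner lines above re-run the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the cached /var/tmp/minikube/kubeadm.yaml. A minimal local sketch of that phase sequence is below, using plain os/exec instead of minikube's ssh_runner (which executes the same commands over SSH inside the VM); the loop and error handling are illustrative only, not minikube's code.

```go
// Sketch: replay the kubeadm init phases recorded above, in order.
// Assumes kubeadm is on the VM at the PATH shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := []string{
		"certs all",
		"kubeconfig all",
		"kubelet-start",
		"control-plane all",
		"etcd local",
	}
	for _, p := range phases {
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase", p, "failed:", err)
			return
		}
	}
}
```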
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
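	The pod_ready lines above show the test waiting (up to 4m0s per pod) for each control-plane pod of embed-certs-014980 to report the Ready condition. A minimal client-go sketch of that kind of readiness poll follows; the function names and kubeconfig handling are illustrative assumptions and this is not minikube's pod_ready.go implementation.

```go
// Sketch: poll a pod's Ready condition until it is True or a timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the API server until the pod is Ready or the deadline passes.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Assumes a reachable kubeconfig at the default location (illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-embed-certs-014980", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```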
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
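	The healthz exchange above goes 403 (anonymous user) to 500 (rbac and priority-class poststarthooks still failing) to 200 in roughly 3.5s. A minimal Go sketch of such a /healthz polling loop follows; it skips TLS verification purely for illustration, whereas minikube's actual api_server.go check authenticates with the cluster's certificates.

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200 or times out.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real check should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not OK within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.226:8444/healthz", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```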
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
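The fix.go lines above compare the guest clock (read via `date +%s.%N` over SSH) with the host clock and accept the skew because the delta stays inside a tolerance. The sketch below shows only that comparison shape; the `withinTolerance` helper name, the 2s tolerance, and the local `time.Now()` stand-ins are illustrative assumptions, not minikube's actual code.

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// withinTolerance reports whether the guest clock is close enough to the
	// host clock that no time resync is needed. Helper name and the 2s
	// tolerance are illustrative assumptions.
	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		host := time.Now()
		guest := host.Add(68 * time.Millisecond) // delta in the spirit of the log above
		if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
		}
	}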
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
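Both "Will wait 60s" lines above are bounded polls: the runner repeatedly stats the CRI-O socket (and then queries crictl) until it responds or the deadline passes. Below is a minimal local sketch of that poll-with-deadline shape, assuming a hypothetical `waitForPath` helper and a 500ms interval; the real check runs remotely through ssh_runner.

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout expires. The interval
	// and helper name are assumptions; minikube does the equivalent over SSH.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}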
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
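The pair of commands above keeps the host.minikube.internal mapping idempotent: grep checks for an existing entry, and the bash one-liner drops any stale line before appending the current IP and copying the file back with sudo. Below is a sketch of how such a command string could be assembled; `ensureHostsEntryCmd` is a hypothetical helper name, not minikube's actual function.

	package main

	import "fmt"

	// ensureHostsEntryCmd builds a one-liner that removes any /etc/hosts line
	// already ending in name and appends "ip<TAB>name", so repeated runs leave
	// exactly one entry. Hypothetical helper, shown for illustration only.
	func ensureHostsEntryCmd(ip, name string) string {
		record := fmt.Sprintf("%s\t%s", ip, name) // literal tab between IP and name
		return fmt.Sprintf(
			`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
			name, record)
	}

	func main() {
		fmt.Println(ensureHostsEntryCmd("192.168.50.1", "host.minikube.internal"))
	}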
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
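The retry.go lines above wait for the no-preload VM to obtain a DHCP lease, retrying with a randomized, roughly growing delay each time the IP lookup comes back empty. A small sketch of that retry-with-jittered-backoff loop follows; the attempt cap, base delay, jitter, and the `lookupIP` placeholder are illustrative assumptions, not minikube's actual implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query; it fails until the
	// lease shows up. Purely a placeholder for the sketch.
	func lookupIP(attempt int) (string, error) {
		if attempt < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.0.2.10", nil // placeholder address
	}

	func main() {
		base := 300 * time.Millisecond
		for attempt := 0; attempt < 10; attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the delay a little each round and add jitter, mirroring the
			// varying "will retry after ..." durations in the log.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			base += base / 4
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
		}
		fmt.Println("gave up waiting for machine to come up")
	}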
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
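The CRI-O reconfiguration above ends with `sudo systemctl restart crio` followed by "Will wait 60s for socket path /var/run/crio/crio.sock", which is simply a bounded poll for the runtime socket to reappear. A minimal sketch of that kind of wait loop in Go, assuming a plain os.Stat poll (illustrative only, not minikube's actual implementation):

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForPath polls until path exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for {
        if _, err := os.Stat(path); err == nil {
            return nil // the path (e.g. the CRI-O socket) is present
        }
        if time.Now().After(deadline) {
            return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
        }
        time.Sleep(500 * time.Millisecond)
    }
}

func main() {
    if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("crio socket is available")
}

The same retry-with-deadline pattern covers the subsequent "Will wait 60s for crictl version" step: retry cheaply, fail loudly once the deadline passes.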
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
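The recurring pod_ready.go:103 lines (here and throughout this log) are minikube polling the metrics-server pod until its Ready condition flips to True, which never happens in this failed run. A hedged client-go sketch of that readiness check follows; the kubeconfig path is a placeholder, the pod name is copied from the log, and this is not minikube's pod_ready.go itself:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    // Placeholder kubeconfig path; pod name is taken from the log above.
    config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-lccm2", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("ready:", podIsReady(pod))
}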
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
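The cache_images/crio.go lines above follow one pattern per image: stat the tarball under /var/lib/minikube/images, skip the transfer if it already exists on the node, then import it into the runtime's storage with `sudo podman load -i <tarball>`. A small sketch of that load step, using a path shown in the log; the helper is illustrative rather than minikube's cache_images implementation:

package main

import (
    "fmt"
    "os"
    "os/exec"
)

// loadImageTarball imports a cached image tarball via `sudo podman load -i`.
func loadImageTarball(tar string) error {
    if _, err := os.Stat(tar); err != nil {
        return fmt.Errorf("tarball not present: %w", err)
    }
    cmd := exec.Command("sudo", "podman", "load", "-i", tar)
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    return cmd.Run()
}

func main() {
    if err := loadImageTarball("/var/lib/minikube/images/kube-apiserver_v1.31.0"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}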
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
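The 2161-byte kubeadm.yaml.new written above is the multi-document config dumped a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration). One quick way to sanity-check which documents ended up in the file, assuming gopkg.in/yaml.v3 as a dependency (this is not something the log shows minikube doing, just an inspection aid):

package main

import (
    "errors"
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    // Decode each YAML document in turn and print its kind/apiVersion.
    dec := yaml.NewDecoder(f)
    for {
        var doc struct {
            APIVersion string `yaml:"apiVersion"`
            Kind       string `yaml:"kind"`
        }
        if err := dec.Decode(&doc); err != nil {
            if errors.Is(err, io.EOF) {
                break
            }
            panic(err)
        }
        fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    }
}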
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
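The six `openssl x509 ... -checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster restart proceeds. An equivalent check in Go using only the standard library, with one of the certificate paths from the log (a sketch, not minikube's certs.go):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the PEM certificate at certPath expires
// before now+window (the Go analogue of `openssl x509 -checkend`).
func expiresWithin(certPath string, window time.Duration) (bool, error) {
    data, err := os.ReadFile(certPath)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM data in %s", certPath)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("expires within 24h:", soon)
}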
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
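The healthz wait that starts here polls https://192.168.61.138:8443/healthz until it returns 200, tolerating the 403 and 500 responses shown below while the apiserver's bootstrap poststarthooks (rbac/bootstrap-roles, bootstrap-controller, apiservice registration) finish. A self-contained sketch of such a poll; TLS verification is skipped only to keep the example short, whereas a real check should trust the cluster CA:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            // 403 before RBAC bootstrap, 500 while poststarthooks run.
            fmt.Printf("healthz not ready yet: HTTP %d\n", resp.StatusCode)
        }
        time.Sleep(time.Second)
    }
    return fmt.Errorf("apiserver healthz did not become ready within %s", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.61.138:8443/healthz", 60*time.Second); err != nil {
        fmt.Println(err)
    }
}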
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
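	The healthz wait above is just a polling loop against the apiserver's /healthz endpoint: anonymous requests first get 403, then 500 while poststarthooks such as rbac/bootstrap-roles finish, and finally 200 "ok". Below is a minimal standalone sketch of such a loop, assuming the 192.168.61.138:8443 endpoint seen in this log and a skip-verify TLS client; it is not minikube's actual api_server.go code.

    // healthz_probe.go: illustrative sketch of the apiserver healthz wait logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.61.138:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			// 403 (anonymous user) and 500 (poststarthook ... failed) are the
    			// intermediate states seen in the log; only 200 "ok" ends the wait.
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }

	The 403 and 500 responses logged above are expected while the control plane restarts; the wait only completes on the final 200 "ok".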
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
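	Enabling the addons amounts to copying the manifests under /etc/kubernetes/addons/ onto the node and applying them with the bundled kubectl, exactly as the Run: lines above show. The following is a minimal sketch of that apply step under the assumption that it runs directly on the node (minikube drives it through the SSH session established earlier), reusing the paths from this log.

    // apply_addons.go: illustrative sketch of the metrics-server apply step logged above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig
    	//   /var/lib/minikube/binaries/v1.31.0/kubectl apply -f <manifests>
    	args := []string{
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
    		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	fmt.Println(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }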
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
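	The recurring pod_ready.go:103 and node_ready.go:53 lines are the same kind of check: fetch the object and look for a Ready condition with status True. Below is a minimal client-go sketch of the pod-side check, assuming the jenkins kubeconfig path and the metrics-server pod name taken from this log; minikube's own helper additionally skips pods whose node is not yet Ready, as seen earlier.

    // pod_ready_sketch.go: illustrative sketch of the Ready-condition check logged above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig path, copied from the settings.go lines in this log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19529-10317/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-6867b74b74-d5x89", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		// A pod counts as Ready only when the PodReady condition is True.
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
    }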
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
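The cycle above is the pattern this test repeats while it waits for a control plane to appear: for each component it runs `sudo crictl ps -a --quiet --name=<component>` on the node and treats empty output as "No container was found matching". A minimal local sketch of that probe (an illustration only, not minikube's implementation; it assumes crictl is on PATH and skips the SSH hop the log shows) could look like:

// Sketch: reproduce the per-component probe from the log above.
// Empty `crictl ps -a --quiet --name=<name>` output means the
// component container is not running on this node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%q containers: %v\n", c, ids)
	}
}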
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
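Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", i.e. nothing is listening on the apiserver port on that node yet. A quick way to confirm that from the node (an illustrative check, not part of this test harness) is a plain TCP dial:

// Sketch: probe the apiserver port to distinguish "nothing listening"
// (connection refused, as in the log) from a TLS or auth problem.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this is the expected path.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8443")
}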
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
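The interleaved pod_ready lines come from the other clusters under test (processes 75908, 76435 and 76486) polling their metrics-server pods until the Ready condition turns True. A hedged client-go sketch of that kind of readiness poll — the kubeconfig path and pod name are copied from the log purely as examples, and this is not minikube's pod_ready implementation:

// Sketch: poll a pod until its Ready condition is True, mirroring the
// `has status "Ready":"False"` lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 60; i++ {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-f56j2", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet; retrying")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for pod to become Ready")
}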
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
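The "container status" step uses a shell fallback: run crictl if it resolves on PATH, otherwise fall back to `docker ps -a`. A small illustrative Go equivalent of that fallback (commands mirror the log line; this is not minikube's internal code):

// Sketch: list all containers, preferring crictl and falling back to docker,
// matching `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func listAllContainers() (string, error) {
	// Try crictl first, matching the `which crictl || echo crictl` branch.
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	// Fall back to docker, matching the `|| sudo docker ps -a` branch.
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listAllContainers()
	if err != nil {
		fmt.Println("could not list containers:", err)
		return
	}
	fmt.Print(out)
}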
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
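	[editor's note] The lines above are one pass of the diagnostic loop minikube runs for this profile (pid 77396) while waiting for the control plane: it probes for a kube-apiserver process, lists each expected container by name, finds none, and falls back to collecting host-level logs. A minimal shell sketch of that cycle, assembled only from the commands already shown in these lines (nothing here is an additional minikube API), would be:

	    # probe for a running apiserver process, then each expected container by name
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"   # empty output => container not created yet
	    done
	    # nothing found, so gather node-level logs instead
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig        # fails: localhost:8443 connection refused
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

	The same cycle repeats throughout the remainder of this log; the recurring "connection to the server localhost:8443 was refused" error on the describe-nodes step indicates the apiserver never came up for this profile during the captured window.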
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
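	At this point one of the StartStop runs (pid 76435) has exhausted its 4m0s readiness window for metrics-server and falls back to wiping the control plane with kubeadm reset before re-initialising. A hedged, manual equivalent of the readiness poll repeated above: the profile name is a placeholder and the k8s-app=metrics-server selector is an assumption about the addon's pod labels, not taken from these log lines.

	    # Sketch of the Ready condition the test keeps polling (not the test's own code).
	    kubectl --context <profile> -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'
	    # The log shows this staying "False" for the full window, after which minikube
	    # runs "kubeadm reset --cri-socket /var/run/crio/crio.sock --force" and rebuilds.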
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
	
	
	==> CRI-O <==
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.801731862Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724869824801709240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe9c1090-945f-4359-81cc-8bbea11aed40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.802249792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dadf5de9-2aa7-4c46-b02f-bd633719043d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.802315672Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dadf5de9-2aa7-4c46-b02f-bd633719043d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.802349682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dadf5de9-2aa7-4c46-b02f-bd633719043d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.833721799Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=83921cfc-03c6-45a7-831e-1da101734fcc name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.833814205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=83921cfc-03c6-45a7-831e-1da101734fcc name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.834834239Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=738cb38f-87d8-4dbd-96fd-6c905464c9ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.835231822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724869824835206586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=738cb38f-87d8-4dbd-96fd-6c905464c9ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.835745012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7a06519-de86-44ed-8954-1b7f7e074854 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.835816469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7a06519-de86-44ed-8954-1b7f7e074854 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.835852726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c7a06519-de86-44ed-8954-1b7f7e074854 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.865984227Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9f8e689-dba5-401c-9afa-0bf9fb17e325 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.866074386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9f8e689-dba5-401c-9afa-0bf9fb17e325 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.867336022Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b73ac02c-da5b-407c-a76c-15ebe7e0e209 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.867738738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724869824867717647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b73ac02c-da5b-407c-a76c-15ebe7e0e209 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.868489896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51cf352f-505b-4363-8dc8-ecc53d90baf2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.868553956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51cf352f-505b-4363-8dc8-ecc53d90baf2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.868595650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=51cf352f-505b-4363-8dc8-ecc53d90baf2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.899954232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb1b7ec6-bba1-4c89-90bb-e2489f808067 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.900042057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb1b7ec6-bba1-4c89-90bb-e2489f808067 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.901123916Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=201a76c7-5000-4113-965d-56bd279dc06a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.901698275Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724869824901661311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=201a76c7-5000-4113-965d-56bd279dc06a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.902267387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab33637b-bd0a-4961-a0db-dc3ebb75db0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.902350501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab33637b-bd0a-4961-a0db-dc3ebb75db0f name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:30:24 old-k8s-version-131737 crio[633]: time="2024-08-28 18:30:24.902460709Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab33637b-bd0a-4961-a0db-dc3ebb75db0f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug28 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053841] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038492] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug28 18:22] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.351947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.186067] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.056442] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067838] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.210439] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.181798] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.238436] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.531745] systemd-fstab-generator[889]: Ignoring "noauto" option for root device
	[  +0.068173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.717012] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[ +12.982776] kauditd_printk_skb: 46 callbacks suppressed
	[Aug28 18:26] systemd-fstab-generator[5132]: Ignoring "noauto" option for root device
	[Aug28 18:28] systemd-fstab-generator[5416]: Ignoring "noauto" option for root device
	[  +0.064360] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:30:25 up 8 min,  0 users,  load average: 0.10, 0.20, 0.12
	Linux old-k8s-version-131737 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: goroutine 146 [runnable]:
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000c0e380)
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: goroutine 147 [select]:
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00064edc0, 0xc0001cb001, 0xc0004eea80, 0xc0003836f0, 0xc0003b9580, 0xc0003b9540)
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001cb020, 0x0, 0x0)
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000c0e380)
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5598]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 28 18:30:22 old-k8s-version-131737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 28 18:30:22 old-k8s-version-131737 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 28 18:30:22 old-k8s-version-131737 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5656]: I0828 18:30:22.864653    5656 server.go:416] Version: v1.20.0
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5656]: I0828 18:30:22.864903    5656 server.go:837] Client rotation is on, will bootstrap in background
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5656]: I0828 18:30:22.866959    5656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5656]: I0828 18:30:22.868379    5656 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 28 18:30:22 old-k8s-version-131737 kubelet[5656]: W0828 18:30:22.868549    5656 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
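The kubelet excerpt above shows the service in a restart loop (systemd restart counter at 20) together with the warning "Cannot detect current cgroup on cgroup v2", and the minikube output itself suggests checking the kubelet journal and retrying with an explicit cgroup driver. A minimal sketch of that suggestion, assuming the profile name from this run and a recent minikube CLI:

  # inspect the kubelet journal on the node (the check suggested in the output above)
  minikube ssh -p old-k8s-version-131737 -- sudo journalctl -xeu kubelet

  # retry the start with the kubelet cgroup driver pinned to systemd, as suggested
  minikube start -p old-k8s-version-131737 --extra-config=kubelet.cgroup-driver=systemd
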
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (225.266041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-131737" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (701.33s)
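Both the empty "container status" table and the "connection to the server localhost:8443 was refused" error in the logs above point at the control plane never coming back up after the restart, which is consistent with the apiserver state reported as Stopped here. One way to confirm that directly, assuming crictl is present on the node as it normally is for the crio runtime used in this job:

  # list all containers on the node, including exited ones
  minikube ssh -p old-k8s-version-131737 -- sudo crictl ps -a

  # the profile-level view of the same state, as used by the test harness
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737
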

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:35:31.163769394 +0000 UTC m=+6250.214331376
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
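The wait above is driven by a label selector in a fixed namespace, so the same check can be reproduced by hand against the profile's kubeconfig context. A minimal sketch, assuming kubectl is on the PATH and the context carries the profile name as in this run:

  # list the dashboard pods matched by the selector the test waits on
  kubectl --context default-k8s-diff-port-640552 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard -o wide
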
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-640552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-640552 logs -n 25: (2.056796303s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
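	(As a cross-check, the delta is simply guest time minus local time: 1724869312.259614140 - 1724869312.172457684 ≈ 0.087156 s, i.e. the 87.156456ms reported above.)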
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
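	The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and re-add net.ipv4.ip_unprivileged_port_start=0 under default_sysctls, all in the same drop-in file. A minimal way to confirm the resulting values on the guest would be (a sketch only, using the file path and keys shown above):
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf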
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
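	(For scale: that is 389,136,428 bytes moved in about 1.27 s, roughly 300 MB/s between host and guest.)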
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
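	The .0 symlink names created above come straight from the preceding openssl x509 -hash -noout calls: OpenSSL's CApath lookup expects each trusted certificate to be reachable as <subject-hash>.0, so 3ec20f2e.0 points at 175282.pem, b5213941.0 at minikubeCA.pem, and 51391683.0 at 17528.pem. A manual check for one of them might look like (sketch, using the same file as above):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941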
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
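	Each -checkend 86400 invocation above exits non-zero if the certificate in question expires within the next 24 hours (86400 seconds); the log moves on to StartCluster without regenerating any of them. A standalone equivalent for one of the same files would be (sketch):
	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo still valid for 24h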
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
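The run above shows minikube polling https://192.168.39.226:8444/healthz until the apiserver answers 200, treating the interim 403 (anonymous user before RBAC bootstrap) and 500 ([-]poststarthook/rbac/bootstrap-roles still failing) responses as "not ready yet". A minimal Go sketch of such a poll loop, assuming a plain net/http client with TLS verification skipped purely for illustration (this is not minikube's actual api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-internal cert during bring-up,
			// so verification is skipped here (illustrative only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403/500 mean the apiserver is up but post-start hooks
				// (e.g. rbac/bootstrap-roles) have not finished; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.226:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

In the log the equivalent loop converges after roughly 3.5s, once the rbac and scheduling post-start hooks finish.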
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
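Bridge CNI is configured by copying a conflist into /etc/cni/net.d on the node. The 496-byte 1-k8s.conflist itself is not reproduced in the log, so the sketch below writes a generic bridge + host-local conflist with assumed contents, for illustration only:

	package main

	import (
		"fmt"
		"os"
	)

	// Assumed example content; the actual 1-k8s.conflist minikube copies
	// is not shown in the log.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
	}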
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
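The pod_ready wait above checks each system-critical pod's Ready condition and skips pods whose node still reports Ready=False. A small client-go sketch of the underlying pod check (the pod name and kubeconfig path are taken from the log; this is not minikube's pod_ready.go):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19529-10317/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The wait in the log additionally skips pods whose node is not
		// Ready yet; here only the pod condition itself is inspected.
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-t5lx6", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
	}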
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
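The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the 68.771876ms drift because it falls inside the allowed tolerance. A minimal Go sketch of that comparison, reusing the timestamps from this log; the one-second tolerance is an assumption, not a value taken from minikube's code:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance returns the absolute drift between the guest and
	// host clocks and whether it is small enough to skip resetting the guest time.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Timestamps taken from the log entries above.
		guest := time.Date(2024, 8, 28, 18, 22, 11, 947970723, time.UTC)
		host := time.Date(2024, 8, 28, 18, 22, 11, 879198847, time.UTC)
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second) // 1s tolerance is an assumption
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)           // prints delta=68.771876ms withinTolerance=true
	}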
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
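The two "Will wait 60s" lines above poll for the CRI-O socket and then for a working crictl before provisioning continues. A rough Go sketch of that bounded wait, polling with os.Stat until the path exists; the one-second polling interval is an assumption, not taken from this log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForFile polls path until it exists or the timeout elapses.
	func waitForFile(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // the socket (or any file) showed up in time
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// In the log the target is /var/run/crio/crio.sock with a 60s budget.
		if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second, time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}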
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
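The retry.go lines above re-check the libvirt DHCP leases with growing, jittered delays until the no-preload VM reports an IP address. A simplified Go sketch of that retry-with-backoff pattern; the lookup function, jitter factor, and attempt limit are placeholders rather than minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoIP = errors.New("machine has no IP yet")

	// retryWithBackoff calls fn until it succeeds, sleeping a jittered, doubling
	// delay between attempts, and gives up after maxAttempts failures.
	func retryWithBackoff(fn func() error, base time.Duration, maxAttempts int) error {
		delay := base
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return fmt.Errorf("machine did not come up after %d attempts", maxAttempts)
	}

	func main() {
		attempts := 0
		lookupIP := func() error { // stand-in for the DHCP lease lookup seen in the log
			attempts++
			if attempts < 4 {
				return errNoIP
			}
			return nil
		}
		_ = retryWithBackoff(lookupIP, 300*time.Millisecond, 10)
	}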
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
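The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, recording how long the wait took. A minimal client-go sketch of that readiness check; it assumes a kubernetes.Interface has already been constructed elsewhere (building the clientset from a kubeconfig is omitted), and the 2s polling interval is an assumption:

	package waitutil

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitPodReady polls the pod until its Ready condition is True or the timeout elapses.
	func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil // matches the "Ready":"True" lines in the log
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}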
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
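The "daemon lookup ... No such image" lines above are expected on this host: the images are not in a local Docker/Podman daemon, so minikube falls back to its on-disk cache and next asks the node what is already present. A minimal sketch of that presence check, using the same commands the log shows being run over SSH (image name copied from the list above):

    # list what the CRI runtime (crio, via crictl) already has
    sudo crictl images --output json
    # ask podman for the ID of one specific image; a non-zero exit means it is absent
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.31.0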
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
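The interleaved pod_ready lines come from a separate profile that keeps polling a metrics-server pod which never reports Ready. To reproduce that check by hand, one could query the pod's Ready condition directly; a sketch, with the pod name taken from the log and the context name left as a placeholder since the profile for process 76435 is not shown here:

    kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-f56j2 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'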
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
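Because no preload tarball exists for v1.31.0 with crio, each image above goes through the same transfer/load cycle: drop the tag whose hash does not match, stat the cached tarball on the node (the copy is skipped when it already exists), then podman-load it into crio's store. A rough manual equivalent for one image, with paths and names exactly as in the log:

    # remove the image that does not match the expected digest
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
    # check whether the cached tarball is already on the node (copy is skipped if so)
    stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
    # load the tarball into crio's image store via podman
    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0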
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
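The unit fragment above is written to the node as a systemd drop-in for kubelet (10-kubeadm.conf a few lines below). To confirm what actually landed on the machine, the merged unit can be printed with systemd's own tooling; a sketch, run on the node (for example via minikube ssh):

    # show the base unit plus every drop-in, including 10-kubeadm.conf
    systemctl cat kubelet
    # verify the service picked up the new ExecStart after daemon-reload
    systemctl status kubelet --no-pager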
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
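The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Newer kubeadm releases ship a `config validate` subcommand; assuming the v1.31.0 binary staged by minikube supports it, a static sanity check of the rendered file would look roughly like this:

    # validate the generated config against kubeadm's API types before using it
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new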
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
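The one-liner above is the same idiom used earlier for host.minikube.internal: filter any existing entry for the name out of /etc/hosts, append the fresh mapping, and copy the result back with sudo (the rewrite goes through a temp file because the redirection itself runs unprivileged). A generic sketch of the pattern, with a placeholder IP and hostname rather than the cluster's values:

    # rebuild /etc/hosts without the old entry, append the new mapping, then install it over the original
    { grep -v $'\texample.internal$' /etc/hosts; echo $'192.0.2.10\texample.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts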
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
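Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether a cert still counts as valid. Checked by hand it looks roughly like this, with one path taken from the log:

    CERT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    # print the expiry date for a human
    sudo openssl x509 -noout -enddate -in "$CERT"
    # exit 0 if still valid 86400 seconds (24h) from now, 1 otherwise
    sudo openssl x509 -noout -checkend 86400 -in "$CERT" && echo "still valid for 24h" || echo "expires within 24h"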
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
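The four grep/rm pairs above implement a simple rule: keep an existing kubeconfig only if it already points at control-plane.minikube.internal:8443, otherwise delete it so kubeadm regenerates it in the next phase. As a sketch, the per-file check reduces to:

    f=/etc/kubernetes/admin.conf
    # keep the file only when it targets the expected endpoint; otherwise remove it
    sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"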
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
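Since the old configuration files were unusable, the restart path re-runs individual kubeadm init phases against the regenerated kubeadm.yaml instead of a full `kubeadm init`. The sequence as plain commands, mirroring the log (including the PATH override used to pick up the staged v1.31.0 binaries):

    B=/var/lib/minikube/binaries/v1.31.0
    C=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$B:$PATH" kubeadm init phase certs all         --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase kubeconfig all    --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase kubelet-start     --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase control-plane all --config "$C"
    sudo env PATH="$B:$PATH" kubeadm init phase etcd local        --config "$C"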
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
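The cycle above repeats for the remainder of this test: the runner asks CRI-O for containers matching each control-plane component (sudo crictl ps -a --quiet --name=<component>), every probe returns an empty ID list, and it falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs. Below is a minimal, hedged sketch of that probe step in Go; it shells out locally with os/exec and only illustrates the command visible in the log, not minikube's actual ssh_runner/cri implementation (which runs the same command on the VM over SSH).

// probe_sketch.go - hedged sketch of the "listing CRI containers" step seen in the log.
// It runs the same command the log shows:
//   sudo crictl ps -a --quiet --name=<component>
// and reports whether any container IDs came back. Requires crictl and sudo locally.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the W-level "No container was found matching ..." lines above.
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}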
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
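Interleaved with those probe cycles, three other test processes (PIDs 75908, 76435 and 76486 above) keep polling their metrics-server pods, which never leave "Ready":"False" in this window. The following stdlib-only Go sketch shows that kind of poll-until-ready loop in general terms; the checkReady callback is a stand-in assumption for the real API lookup, so this is illustrative rather than minikube's pod_ready implementation.

// poll_sketch.go - hedged sketch of the "pod_ready" wait pattern in the log:
// re-check a condition on an interval until it reports true or a deadline passes.
// checkReady is a stand-in for the real lookup of the pod's Ready condition.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitPodReady(checkReady func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ready, err := checkReady()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for pod to be Ready")
		}
		// The log shows one status line roughly every 2-2.5s per waiting process.
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := waitPodReady(func() (bool, error) {
		// Stand-in condition: pretend the pod becomes Ready after 5 seconds.
		return time.Since(start) > 5*time.Second, nil
	}, 2*time.Second, 30*time.Second)
	fmt.Println("waitPodReady:", err)
}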
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
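Every describe-nodes fallback in these cycles fails the same way: with no kube-apiserver container running, nothing listens on the kubeconfig endpoint, so kubectl reports that the connection to localhost:8443 was refused. A small hedged sketch of a reachability pre-check that would distinguish this "port closed" case from other kubectl failures follows; the address is taken from the error above, and the pre-check itself is an assumption for illustration, not something minikube does.

// apiserver_precheck.go - hedged sketch: before running a kubectl fallback,
// check whether anything is listening on the kubeconfig endpoint (localhost:8443 in the log).
// A refused dial matches the repeated "connection ... was refused" stderr above.
package main

import (
	"fmt"
	"net"
	"time"
)

func apiserverListening(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !apiserverListening("localhost:8443", 2*time.Second) {
		fmt.Println("kube-apiserver not reachable on localhost:8443; describe nodes would fail")
		return
	}
	fmt.Println("kube-apiserver reachable; kubectl describe nodes could be attempted")
}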
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
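The container-status step above uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl (by resolved path if found, by bare name otherwise) and only fall back to docker if the crictl invocation fails. A hedged Go sketch of the same fallback, for illustration only:

// containerstatus_sketch.go - hedged sketch of the "container status" fallback seen in the log.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	tool := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	if err == nil {
		return string(out), nil
	}
	// crictl missing or failed: fall back to docker, as the shell "||" does.
	out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}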
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
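None of the control-plane components could be found above, so minikube falls back to host-level sources for this profile (the kubelet and crio journals, dmesg, and raw container status). The per-component queries it issues are the crictl calls shown in the log; condensed into a single loop they amount to the following sketch:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      # an empty result means no container, running or exited, exists for that component
      sudo crictl ps -a --quiet --name="$c"
    done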
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
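The grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it references the expected control-plane endpoint. Here every file is already gone after the kubeadm reset, so each grep exits with status 2 and the rm is a no-op. A minimal bash sketch of the same check, using the endpoint shown in the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop any kubeconfig that does not point at the expected control-plane endpoint
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done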
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
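The health check above is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 response with body "ok" is treated as healthy. An equivalent manual probe against the address in the log (sketch only; -k skips certificate verification for brevity):

    curl -k https://192.168.39.226:8444/healthz
    # expected output: ok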
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
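At this point the default-k8s-diff-port-640552 profile is up and its kubeconfig context has been written. A quick way to confirm from the host, using the context name from the log (illustrative only):

    kubectl --context default-k8s-diff-port-640552 get nodes
    kubectl --context default-k8s-diff-port-640552 get pods -n kube-system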
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
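The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster root CA's DER-encoded public key. It can be recomputed on the node with the standard openssl pipeline, assuming the certificateDir reported earlier in this init ("/var/lib/minikube/certs"):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'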
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
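The 496-byte file written above is minikube's bridge CNI configuration; its exact contents are not shown in the log. A representative conflist of the kind this step installs looks roughly like the following (illustrative values, not the actual file):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF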
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
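The repeated "kubectl get sa default" runs above are minikube waiting for the controller-manager to create the "default" service account before it considers the minikube-rbac privilege elevation (started at 18:26:36) complete. A bash sketch of that wait loop, reusing the binary and kubeconfig paths from the log and assuming the roughly half-second retry interval suggested by the timestamps:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    CFG=/var/lib/minikube/kubeconfig
    # poll until the "default" service account exists in the default namespace
    until sudo "$KUBECTL" get sa default --kubeconfig="$CFG" >/dev/null 2>&1; do
      sleep 0.5
    done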
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
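	The K8S_KUBELET_NOT_RUNNING exit above, together with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd, points at the kubelet never becoming healthy on this node. A minimal manual check, sketched below, simply re-runs the probes and troubleshooting commands the log already quotes; the <profile> placeholder stands for whichever minikube profile failed here, and the `crio config` call assumes that subcommand is available inside the VM — treat this as a diagnostic sketch, not part of the recorded test output.

	# open a shell on the failing node (profile name is a placeholder)
	minikube ssh -p <profile>

	# the health probe kubeadm kept retrying (localhost:10248/healthz is the kubelet healthz endpoint quoted above)
	curl -sSL http://localhost:10248/healthz

	# the troubleshooting commands kubeadm itself recommends
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50

	# compare cgroup drivers: kubelet config vs. CRI-O (a mismatch can keep the kubelet from starting the control plane)
	sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml
	sudo crio config 2>/dev/null | grep -i cgroup_manager

	If the two drivers disagree, retrying with the flag from the suggestion above (minikube start ... --extra-config=kubelet.cgroup-driver=systemd) matches the workaround referenced in kubernetes/minikube#4172.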
	
	
	==> CRI-O <==
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.681462548Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870132681439889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a64a7f10-10a2-435c-811e-d7ed6ead6c48 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.682189172Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a0fe309-3d22-4c44-b302-3ef0839aa6f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.682241676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a0fe309-3d22-4c44-b302-3ef0839aa6f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.682662714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a0fe309-3d22-4c44-b302-3ef0839aa6f3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.715980390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=75416190-1f54-43e9-9d7d-c5d5819ec1c1 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.716052658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75416190-1f54-43e9-9d7d-c5d5819ec1c1 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.717514708Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12904fd3-60b2-4caf-aeda-531ecfcd7b59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.717929092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870132717906255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12904fd3-60b2-4caf-aeda-531ecfcd7b59 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.718410458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c164452d-dbb0-45a9-9822-f300bb077135 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.718460709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c164452d-dbb0-45a9-9822-f300bb077135 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.718646022Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c164452d-dbb0-45a9-9822-f300bb077135 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.754260441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bdbf8ec-9c9e-44db-bf76-0adaa227577f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.754526074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bdbf8ec-9c9e-44db-bf76-0adaa227577f name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.755658126Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bf6a6ac-764b-4384-8b09-da5413871bff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.756344373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870132756319043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bf6a6ac-764b-4384-8b09-da5413871bff name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.757041228Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3c7ba5a-7507-437e-ad51-d1175a33fcb6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.757106769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3c7ba5a-7507-437e-ad51-d1175a33fcb6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.757485502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3c7ba5a-7507-437e-ad51-d1175a33fcb6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.803954007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27e076ab-a5d6-4e43-801e-9ff118c3bbf6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.804056294Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27e076ab-a5d6-4e43-801e-9ff118c3bbf6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.805523528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd05b88b-4cce-4e79-866c-59b13e8d8f8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.806124413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870132806100500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd05b88b-4cce-4e79-866c-59b13e8d8f8c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.806774652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=215e7253-65de-4c11-ad94-cfcb490f3538 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.806866001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=215e7253-65de-4c11-ad94-cfcb490f3538 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:32 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:35:32.807539886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=215e7253-65de-4c11-ad94-cfcb490f3538 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02d2a37fd69e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   6cac35685c17c       storage-provisioner
	9e194ddce09e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   d007dc2e2c3a3       busybox
	93284522e6de6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   ae6e975f4de65       coredns-6f6b679f8f-t5lx6
	729f7a235e3df       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   9d02549c1f543       kube-proxy-lmpft
	48533565061e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   6cac35685c17c       storage-provisioner
	3895a4d3fb7d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   ad350345059c3       etcd-default-k8s-diff-port-640552
	d4b3a88fe2356       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   9e3e4c6602381       kube-apiserver-default-k8s-diff-port-640552
	1d1212a86ca9a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   1489f217ae6f5       kube-controller-manager-default-k8s-diff-port-640552
	101c4701cc860       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   cc29489220c36       kube-scheduler-default-k8s-diff-port-640552
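	
	A roughly equivalent container listing can be pulled directly on the node with the CRI CLI. This is only an illustrative sketch, assuming crictl is pointed at the cri-o socket referenced in the node annotations below:
	
	  # list all containers (running and exited) known to cri-o
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a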
	
	
	==> coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47719 - 32955 "HINFO IN 72317959396472030.9198756957633981570. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.01057349s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-640552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-640552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=default-k8s-diff-port-640552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_14_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:14:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-640552
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:35:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:32:47 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:32:47 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:32:47 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:32:47 +0000   Wed, 28 Aug 2024 18:22:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    default-k8s-diff-port-640552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53c2b777c7454982bb99ff6c37b0f2c6
	  System UUID:                53c2b777-c745-4982-bb99-ff6c37b0f2c6
	  Boot ID:                    4d8cbfc2-df06-4ef4-b068-829fcdbebf68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-6f6b679f8f-t5lx6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-640552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-640552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-640552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-lmpft                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-640552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-lccm2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-640552 event: Registered Node default-k8s-diff-port-640552 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-640552 event: Registered Node default-k8s-diff-port-640552 in Controller
	
	
	==> dmesg <==
	[Aug28 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052887] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.912289] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536202] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.263565] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.063267] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049615] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.193764] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.119487] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.270133] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +4.006864] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +1.798911] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.059563] kauditd_printk_skb: 158 callbacks suppressed
	[Aug28 18:22] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.453029] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +3.255197] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.279446] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] <==
	{"level":"info","ts":"2024-08-28T18:22:02.982408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:02.982525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:02.982565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 received MsgPreVoteResp from 9e3e2863ac888927 at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:02.982613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:02.982648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 received MsgVoteResp from 9e3e2863ac888927 at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:02.982686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9e3e2863ac888927 became leader at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:02.982717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9e3e2863ac888927 elected leader 9e3e2863ac888927 at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:02.988592Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9e3e2863ac888927","local-member-attributes":"{Name:default-k8s-diff-port-640552 ClientURLs:[https://192.168.39.226:2379]}","request-path":"/0/members/9e3e2863ac888927/attributes","cluster-id":"5e6abf1d35eec4c5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T18:22:02.988606Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:02.988807Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:02.988840Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:02.988629Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:02.989721Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:02.989726Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:02.990973Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.226:2379"}
	{"level":"info","ts":"2024-08-28T18:22:02.991923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-28T18:22:18.684027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.933418ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:22:18.684169Z","caller":"traceutil/trace.go:171","msg":"trace[285420362] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:578; }","duration":"121.113141ms","start":"2024-08-28T18:22:18.563041Z","end":"2024-08-28T18:22:18.684155Z","steps":["trace[285420362] 'range keys from in-memory index tree'  (duration: 120.867756ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:22:19.386699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.658415ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9883027998857457561 > lease_revoke:<id:0927919a31fb0b0c>","response":"size:28"}
	{"level":"info","ts":"2024-08-28T18:22:19.387458Z","caller":"traceutil/trace.go:171","msg":"trace[1534868223] linearizableReadLoop","detail":"{readStateIndex:613; appliedIndex:612; }","duration":"156.256622ms","start":"2024-08-28T18:22:19.231150Z","end":"2024-08-28T18:22:19.387407Z","steps":["trace[1534868223] 'read index received'  (duration: 45.23µs)","trace[1534868223] 'applied index is now lower than readState.Index'  (duration: 156.209878ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:22:19.387771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.59981ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-640552\" ","response":"range_response_count:1 size:5530"}
	{"level":"info","ts":"2024-08-28T18:22:19.387884Z","caller":"traceutil/trace.go:171","msg":"trace[1444759853] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-640552; range_end:; response_count:1; response_revision:578; }","duration":"156.741966ms","start":"2024-08-28T18:22:19.231131Z","end":"2024-08-28T18:22:19.387873Z","steps":["trace[1444759853] 'agreement among raft nodes before linearized reading'  (duration: 156.489564ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T18:32:03.021090Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":822}
	{"level":"info","ts":"2024-08-28T18:32:03.031790Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":822,"took":"9.943218ms","hash":1483018848,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2715648,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-28T18:32:03.031903Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1483018848,"revision":822,"compact-revision":-1}
	
	
	==> kernel <==
	 18:35:33 up 13 min,  0 users,  load average: 0.27, 0.18, 0.10
	Linux default-k8s-diff-port-640552 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:32:05.225919       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:32:05.225978       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0828 18:32:05.226972       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:32:05.227024       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:33:05.230925       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:33:05.231049       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:33:05.230983       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:33:05.231118       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:33:05.232362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:33:05.232424       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:35:05.232928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:35:05.233057       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:35:05.232928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:35:05.233157       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:35:05.234340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:35:05.234349       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
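	
	The repeated 503s above show the aggregated v1beta1.metrics.k8s.io API could not reach a healthy metrics-server backend, which lines up with the metrics-server related failures in this run. As an illustrative follow-up only (assuming the kubeconfig context matches the profile name default-k8s-diff-port-640552 and the addon uses the upstream k8s-app=metrics-server label), the APIService and its backing pod could be inspected with:
	
	  # is the aggregated API marked Available, and if not, why
	  kubectl --context default-k8s-diff-port-640552 get apiservice v1beta1.metrics.k8s.io
	  # state and recent logs of the metrics-server pod in kube-system
	  kubectl --context default-k8s-diff-port-640552 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context default-k8s-diff-port-640552 -n kube-system logs deploy/metrics-server --tail=50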
	
	
	==> kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] <==
	E0828 18:30:07.853143       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:30:08.282203       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:30:37.859470       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:30:38.292807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:07.865916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:08.301338       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:37.872903       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:38.310086       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:32:07.879555       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:08.318098       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:32:37.885882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:38.325664       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:32:47.063872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-640552"
	E0828 18:33:07.892200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:08.333517       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:33:20.451818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="241.431µs"
	I0828 18:33:33.452546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="85.141µs"
	E0828 18:33:37.899030       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:38.340648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:34:07.904931       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:08.348405       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:34:37.910669       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:38.354869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:35:07.917033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:35:08.362657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:22:04.977185       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:22:04.988102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.226"]
	E0828 18:22:04.988231       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:22:05.016626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:22:05.016668       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:22:05.016693       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:22:05.019331       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:22:05.019617       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:22:05.019629       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:05.020975       1 config.go:197] "Starting service config controller"
	I0828 18:22:05.021051       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:22:05.021098       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:22:05.021134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:22:05.021934       1 config.go:326] "Starting node config controller"
	I0828 18:22:05.023571       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:22:05.122391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:22:05.122514       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:22:05.125378       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] <==
	I0828 18:22:02.102035       1 serving.go:386] Generated self-signed cert in-memory
	W0828 18:22:04.214002       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:22:04.214137       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:22:04.214167       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:22:04.214220       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:22:04.243756       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 18:22:04.243877       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:04.246437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 18:22:04.246521       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:22:04.247885       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 18:22:04.248557       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 18:22:04.347268       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:34:24 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:24.436032     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:34:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:29.702955     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870069702439383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:29.703248     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870069702439383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:36 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:36.435962     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:34:39 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:39.705508     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870079705044221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:39 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:39.705561     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870079705044221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:49 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:49.438653     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:34:49 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:49.707418     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870089707007428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:49 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:49.707538     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870089707007428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:59.451567     911 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:59.710507     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870099709836756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:34:59.710546     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870099709836756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:02 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:02.435761     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:35:09 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:09.713321     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870109712822233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:09 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:09.713482     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870109712822233,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:17 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:17.439249     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:35:19 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:19.715894     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870119715544217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:19 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:19.716414     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870119715544217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:29.718252     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870129717982937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:29.718322     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870129717982937,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:30 default-k8s-diff-port-640552 kubelet[911]: E0828 18:35:30.435839     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	
	
	==> storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] <==
	I0828 18:22:35.755962       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:22:35.765929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:22:35.765994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:22:53.168809       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:22:53.169983       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8a2adc2-ab3a-4591-a40e-ec62266e56ac", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686 became leader
	I0828 18:22:53.170160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686!
	I0828 18:22:53.270678       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686!
	
	
	==> storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] <==
	I0828 18:22:04.906425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:22:34.908831       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
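The controller-manager's repeated "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors and the kubelet's ImagePullBackOff messages above are the same condition seen from two sides: the metrics-server addon was pointed at the non-existent registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit table further down), so its pod never starts and the v1beta1.metrics.k8s.io APIService it backs never becomes available to discovery. A minimal manual check along these lines (a sketch only; the context name, deployment name, and APIService name are taken from or inferred from the logs above):

  # Does the aggregated metrics API report Available?
  kubectl --context default-k8s-diff-port-640552 get apiservice v1beta1.metrics.k8s.io
  # Which image was the addon pointed at? (the test sets it to fake.domain/registry.k8s.io/echoserver:1.4)
  kubectl --context default-k8s-diff-port-640552 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'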
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lccm2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2: exit status 1 (61.857662ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lccm2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2: exit status 1
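The NotFound above is most likely a namespace mismatch rather than the pod having disappeared: the describe at helpers_test.go:277 is run without a -n flag, so kubectl looks in the default namespace, while the non-running pod reported at helpers_test.go:272 lives in kube-system (as the kubelet log shows). A namespaced variant of the same check, reusing the context and pod name from the output above, would be:

  kubectl --context default-k8s-diff-port-640552 -n kube-system describe pod metrics-server-6867b74b74-lccm2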
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0828 18:26:55.397005   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-014980 -n embed-certs-014980
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:35:51.810580437 +0000 UTC m=+6270.861142427
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
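When this wait times out, the equivalent manual check is to query the dashboard namespace directly. A sketch, assuming the kubectl context matches the profile name (as the helpers above do for default-k8s-diff-port-640552) and using the label and namespace from the wait message:

  kubectl --context embed-certs-014980 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard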
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-014980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-014980 logs -n 25: (2.061628942s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
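The repeated "will retry after …: waiting for machine to come up" lines above come from polling libvirt for the guest's DHCP lease, with the delay growing between attempts. A minimal sketch of that polling pattern in Go (getLeaseIP and the growth factor are assumptions for illustration, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// getLeaseIP is a stand-in for querying libvirt/DHCP for the domain's
	// current IP address; it fails until the guest has obtained a lease.
	func getLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease for " + mac + " yet") // placeholder
	}

	// waitForIP polls with a growing delay, mirroring the
	// "will retry after Xms: waiting for machine to come up" log lines.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getLeaseIP(mac); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait between attempts
		}
		return "", fmt.Errorf("timed out waiting for machine %s to come up", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:4c:61:8f", 2*time.Second); err == nil {
			fmt.Println("found IP:", ip)
		}
	}
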
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
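The fix.go lines above read the guest clock over SSH ("date +%s.%N"), compare it with the host clock, and accept the host when the delta is small. A hedged sketch of that comparison; the one-second tolerance is an assumption for illustration, not minikube's actual setting:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta returns the absolute difference between the guest clock
	// (read over SSH as a Unix timestamp) and the host clock, and whether
	// it falls inside the given tolerance.
	func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(91 * time.Millisecond) // roughly the ~92ms delta seen in the log
		delta, ok := clockDelta(guest, host, time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}
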
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
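The sequence from "Checking if preload exists" to "Images are preloaded, skipping loading" probes the runtime's image list with crictl and, when the expected images are missing, copies the preloaded tarball over SSH and unpacks it under /var before re-checking. A rough Go sketch of that decision; hasPreloadedImages and copyAndExtractPreload are hypothetical helpers, not minikube's API:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasPreloadedImages stands in for running "sudo crictl images --output json"
	// and checking that the expected image for the target Kubernetes version is
	// present; a real implementation would parse the JSON output.
	func hasPreloadedImages(kubeVersion string) bool {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), "registry.k8s.io/kube-apiserver:"+kubeVersion)
	}

	// copyAndExtractPreload is a hypothetical helper mirroring the scp of the
	// .tar.lz4 and the "tar ... -I lz4 -C /var -xf /preloaded.tar.lz4" step.
	func copyAndExtractPreload(tarball string) error {
		fmt.Printf("copying %s and extracting under /var\n", tarball)
		return nil
	}

	func ensurePreload(kubeVersion, tarball string) error {
		if hasPreloadedImages(kubeVersion) {
			fmt.Println("all images are preloaded, skipping loading")
			return nil
		}
		return copyAndExtractPreload(tarball)
	}

	func main() {
		_ = ensurePreload("v1.31.0", "preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4")
	}
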
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
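
	Note: the libmachine DBG lines above show retry.go waiting for the VM's DHCP lease with a growing, jittered delay between probes. A minimal Go sketch of that retry pattern; the probe function, growth rule and jitter are placeholder assumptions, not the real retry.go behaviour:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls probe() with a jittered, growing delay, mirroring the
	// "will retry after ..." lines above. probe stands in for the real
	// "does the domain have an IP yet?" check.
	func waitForIP(probe func() (string, error), attempts int) (string, error) {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := probe(); err == nil {
				return ip, nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
		return "", errors.New("machine did not come up")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.39.226", nil
		}, 10)
		fmt.Println(ip, err)
	}
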
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
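
	Note: the bash one-liner at 18:21:38.461003 rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP before kubelet is restarted. A stdlib-only Go sketch of the same idea, operating on an in-memory copy of the file rather than the real /etc/hosts:

	package main

	import (
		"fmt"
		"strings"
	)

	// pinHost drops any line whose hostname column is name and appends a fresh
	// entry, mirroring the grep -v / echo pipeline in the log above.
	func pinHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(strings.TrimSpace(line), "\t"+name) {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.72.1\tcontrol-plane.minikube.internal\n"
		fmt.Print(pinHost(before, "192.168.72.130", "control-plane.minikube.internal"))
	}
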
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
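
	Note: the openssl x509 -checkend 86400 runs above confirm each control-plane certificate is still valid at least 24 hours from now. A minimal crypto/x509 equivalent; the certificate path in main is illustrative:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkEnd reports whether the certificate in pemPath will still be valid
	// after the given grace period, like `openssl x509 -noout -checkend`.
	func checkEnd(pemPath string, grace time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(grace).Before(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; the log checks several certs under
		// /var/lib/minikube/certs with an 86400s (24h) window.
		ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
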
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
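
	Note: the healthz probing above hits https://192.168.72.130:8443/healthz roughly every 500ms, tolerating the early 403 and 500 responses until the endpoint returns 200 "ok". A minimal sketch of such a polling loop, assuming the probe simply skips TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url until it returns 200 or the deadline passes.
	// TLS verification is skipped here purely for illustration.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %v", timeout)
	}

	func main() {
		fmt.Println(waitHealthz("https://192.168.72.130:8443/healthz", 4*time.Minute))
	}
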
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
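
	Note: /etc/cni/net.d/1-k8s.conflist carries the bridge CNI configuration referenced by the "Configuring bridge CNI" step. The log does not show its contents, so the Go sketch below only emits a generic bridge conflist using the 10.244.0.0/16 pod subnet from the kubeadm config; all field values are illustrative assumptions, not the exact file minikube writes:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Generic bridge CNI conflist; values are illustrative only.
		conflist := map[string]interface{}{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]interface{}{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true,
					"ipam": map[string]interface{}{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conflist, "", "  ")
		fmt.Println(string(out))
	}
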
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
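
	Note: the pod_ready wait above polls each system-critical pod until its Ready condition is True. A client-go sketch of that check for the coredns pod named in the log; the kubeconfig path and poll interval are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is an illustrative assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-4g2n8", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
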
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
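(Editor's note: the sshutil lines above show minikube opening a key-based SSH client to the guest VM before running commands such as `cat /etc/os-release`. The following is a minimal, self-contained Go sketch of that pattern using golang.org/x/crypto/ssh; it is not minikube's sshutil code, and the key path, address, and command are placeholders taken loosely from the log.)

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Placeholder key path; the log uses the machine's generated id_rsa.
    	key, err := os.ReadFile("/path/to/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.226:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	// Run a simple command over the session, as the provisioner does.
    	out, err := session.CombinedOutput("cat /etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }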
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
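(Editor's note: the fix.go lines above compare the guest clock, read via `date +%s.%N` over SSH, with the host clock and report the delta, here 87.156456ms, as within tolerance. A minimal Go sketch of that comparison is shown below; it is not minikube's implementation, and the 2s tolerance is an assumed value for illustration.)

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch converts "seconds.nanoseconds" (the output of `date +%s.%N`) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	// Guest timestamp taken from the log above.
    	guest, err := parseEpoch("1724869312.259614140")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := time.Duration(math.Abs(float64(host.Sub(guest))))
    	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
    	if delta > tolerance {
    		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
    	} else {
    		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	}
    }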
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
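(Editor's note: the repeated "will retry after ..." lines above come from minikube's retry helper while it waits for the old-k8s-version VM to obtain an IP address. The sketch below shows the general retry-with-growing-jittered-delay pattern those lines suggest; it is illustrative only, not the retry.go code, and the starting delay, growth factor, and timeout are assumed values.)

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil keeps calling fn with a growing, jittered delay until it succeeds
    // or the overall timeout elapses.
    func retryUntil(timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	base := 250 * time.Millisecond // assumed starting delay
    	for attempt := 1; ; attempt++ {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		delay := base + time.Duration(rand.Int63n(int64(base))) // add jitter
    		fmt.Printf("will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		base = base * 3 / 2 // grow the base delay between attempts
    	}
    }

    func main() {
    	calls := 0
    	err := retryUntil(5*time.Second, func() error {
    		calls++
    		if calls < 4 {
    			return errors.New("waiting for machine to come up")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }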
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
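(Editor's note: the `openssl x509 -noout -checkend 86400` runs above verify that each control-plane certificate will still be valid 24 hours from now. Below is a minimal Go equivalent of that check using crypto/x509; it is not minikube code, and the certificate path is supplied on the command line rather than hard-coded.)

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	if len(os.Args) != 2 {
    		fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
    		os.Exit(2)
    	}
    	data, err := os.ReadFile(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// 86400 seconds, as passed to -checkend in the log.
    	cutoff := time.Now().Add(24 * time.Hour)
    	if cert.NotAfter.Before(cutoff) {
    		fmt.Printf("certificate expires %s (within 24h)\n", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
    }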
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
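The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls above are a plain poll-until-present loop: pgrep exits non-zero until a process whose full command line matches the pattern exists. A small self-contained sketch of that loop (an assumed illustration, not the actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a matching kube-apiserver
// process appears or the timeout elapses.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only if a matching process exists.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}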
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
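The healthz probes above show the usual progression for a restarting control plane: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, then 200 once bootstrapping completes. A minimal sketch of such a poll loop (an assumed illustration; the URL is the one from the log, and TLS verification is skipped as a bare health probe would here):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.226:8444/healthz", time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}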
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
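The pod_ready waits above all follow the same shape: fetch the pod, check its Ready condition, and bail out early with an explanatory error while the hosting node itself still reports "Ready":"False". A hedged client-go sketch of the basic "wait for a pod to become Ready" loop (illustrative only, not minikube's pod_ready helper; the pod name and kubeconfig path are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-t5lx6", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}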
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
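The addon manifests copied above are then applied in one batch with the cluster's own kubectl binary, pointed at the node-local kubeconfig. A rough sketch of that invocation from Go (an assumption about the shape, not minikube's code; binary, kubeconfig, and manifest paths are the ones shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
	// Point kubectl at the kubeconfig that lives on the node.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
	}
}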
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
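WaitForSSH above amounts to retrying "ssh ... exit 0" with the machine's private key and relaxed host-key checking (the exact option set appears a few lines up) until the command returns 0. A standalone sketch of that retry loop (an assumption about the shape, not libmachine's implementation; the IP and key path are the ones from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op SSH command until it succeeds or times out.
func waitForSSH(ip, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+ip, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("SSH to %s not available within %s", ip, timeout)
}

func main() {
	key := "/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa"
	if err := waitForSSH("192.168.50.99", key, 3*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}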
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
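The SSH command above patches /etc/hosts so the freshly set hostname resolves locally via 127.0.1.1. Below is a minimal Go sketch of the same presence check, assuming the hostname value shown in the log; it only reports what it would do rather than editing the file.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	hostname := "old-k8s-version-131737" // value taken from the log above
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Same idea as the `grep -xq '.*\s<name>'` check in the SSH command above.
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data)
	if present {
		fmt.Println("hostname already resolvable via /etc/hosts")
		return
	}
	fmt.Printf("would append: 127.0.1.1 %s\n", hostname)
}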
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
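The fix.go lines above read the guest clock over SSH (`date +%s.%N`) and compare it against the host-side reference time, accepting a small delta. Here is a minimal Go sketch of that arithmetic using the two timestamps from the log; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest timestamp as reported by `date +%s.%N` in the log above.
	guest := time.Unix(1724869331, 947970723)
	// Host-side reference time from the same log line.
	remote := time.Date(2024, 8, 28, 18, 22, 11, 879198847, time.UTC)
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	// The one-second tolerance here is an assumed value for illustration only.
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second)
}

With the values above this prints a delta of roughly 68.77ms, matching the log line.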
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
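The is-active probes above decide whether containerd and docker still need to be stopped and masked before cri-o takes over. A hedged Go sketch of the same probe, relying only on the fact that `systemctl is-active --quiet` exits non-zero for inactive units; the unit names are illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// unitActive mirrors the `systemctl is-active --quiet <unit>` probes above:
// the command exits 0 only when the unit is active.
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, u := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s active: %v\n", u, unitActive(u))
	}
}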
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
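The Run lines above rewrite the cri-o drop-in with sed so it uses registry.k8s.io/pause:3.2 and the cgroupfs cgroup manager. A small Go sketch of equivalent in-memory edits follows, assuming an illustrative sample of the drop-in's content rather than the real /etc/crio/crio.conf.d/02-crio.conf.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample drop-in content; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := "[crio.image]\npause_image = \"k8s.gcr.io/pause:3.5\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	// Same substitutions as the two sed commands in the log above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}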
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
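Here the sysctl probe fails because br_netfilter is not loaded yet, so the log falls back to modprobe and then enables IPv4 forwarding directly. A trivial Go sketch that reads both settings back from /proc/sys to confirm the result; the paths are the ones used in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSetting returns the current value of a /proc/sys entry, or the error text.
func readSetting(path string) string {
	b, err := os.ReadFile(path)
	if err != nil {
		return "unavailable: " + err.Error()
	}
	return strings.TrimSpace(string(b))
}

func main() {
	fmt.Println("bridge-nf-call-iptables:", readSetting("/proc/sys/net/bridge/bridge-nf-call-iptables"))
	fmt.Println("ip_forward:", readSetting("/proc/sys/net/ipv4/ip_forward"))
}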
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
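The preload tarball was copied to /preloaded.tar.lz4 and is now unpacked into /var with extended attributes preserved. A minimal Go sketch that shells out to the same tar invocation (sudo omitted); the path and flags mirror the Run line above.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same tar invocation as the Run line above; sudo is omitted here.
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}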
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
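The DBG lines above show retry.go polling for the domain's DHCP lease with growing, slightly randomized waits. Below is a hedged Go sketch of a retry loop in the same spirit; lookupIP is a hypothetical stand-in for the libvirt lease query and the timing constants are illustrative.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for the libvirt DHCP-lease query; it always fails here.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(5 * time.Second)
	wait := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		// Grow the base delay and add jitter, similar in spirit to the waits above.
		d := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", d)
		time.Sleep(d)
		wait += wait / 2
	}
	fmt.Println("timed out waiting for an IP address")
}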
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
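LoadCachedImages first asks the runtime (via podman image inspect) whether each required image is already present, marks missing ones as "needs transfer", and then falls back to the on-disk cache, which is what fails here. A minimal Go sketch of that presence probe; the image name is one of those listed above and sudo is omitted.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent mirrors the `podman image inspect --format {{.Id}}` probes above:
// a zero exit with an ID means the image is already in the container runtime.
func imagePresent(image string) (string, bool) {
	out, err := exec.Command("podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	img := "registry.k8s.io/pause:3.2" // one of the images listed above
	if id, ok := imagePresent(img); ok {
		fmt.Printf("%s present: %s\n", img, id)
	} else {
		fmt.Printf("%s needs transfer (not in the container runtime)\n", img)
	}
}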
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
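The kubeadm/kubelet/kube-proxy configuration above is rendered from the options logged at kubeadm.go:181. Below is a minimal, illustrative Go sketch of rendering such a fragment with text/template; it is not minikube's actual template and only covers a few of the fields.

package main

import (
	"os"
	"text/template"
)

// initCfg is a trimmed, illustrative fragment of the config shown above;
// it is not minikube's actual template.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	opts := struct {
		AdvertiseAddress, CRISocket, NodeName, NodeIP string
		APIServerPort                                 int
	}{
		AdvertiseAddress: "192.168.50.99",
		APIServerPort:    8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "old-k8s-version-131737",
		NodeIP:           "192.168.50.99",
	}
	template.Must(template.New("init").Parse(initCfg)).Execute(os.Stdout, opts)
}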
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
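
	The scp calls above stage the shared CA material and the profile's serving and client key pairs under /var/lib/minikube/certs for kubeadm to reuse. A hypothetical spot-check, not part of the test, that a copied certificate and key actually pair up (assumes RSA keys, which matches the file sizes logged above):

	    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	    sudo openssl rsa  -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5
	    # identical digests => the certificate and key belong together
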
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
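
	These pod_ready lines are minikube polling each pod's Ready condition through the API server. Roughly the same check done by hand; the context and pod names are taken from the log, and the jsonpath form is just one way to read the condition:

	    kubectl --context default-k8s-diff-port-640552 -n kube-system get pod metrics-server-6867b74b74-lccm2 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
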
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
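
	The repeated "will retry after ..." lines are the KVM driver waiting for the freshly booted domain to pick up a DHCP lease, with growing, jittered delays between lookups. A rough shell analogue of that wait, assuming virsh access to the libvirt URI and network named in the log (the real driver queries libvirt directly from Go):

	    until virsh -c qemu:///system net-dhcp-leases mk-no-preload-072854 | grep -q '52:54:00:56:8e:fa'; do
	      sleep 1   # the driver uses growing, jittered delays rather than a fixed 1s
	    done
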
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
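
	The 3ec20f2e.0, b5213941.0 and 51391683.0 names above are OpenSSL subject-hash values, which is how certificates under /etc/ssl/certs are indexed. The derivation for one of them, using the same two commands the log shows:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
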
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
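
	Each "-checkend 86400" run exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether the control-plane certs still have enough runway. The same checks collapsed into one loop (illustrative; the glob covers the files checked above):

	    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	      sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring within 24h: $c"
	    done
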
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
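
	The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is deleted so the following kubeadm phases regenerate it. The same logic as a single loop (illustrative):

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done
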
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
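
	On a restart the control plane is rebuilt by re-running individual kubeadm phases rather than a full "kubeadm init"; the five commands above cover certs, kubeconfigs, kubelet start, the static control-plane manifests and local etcd. Collapsed into a loop (illustrative; version and paths as logged):

	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml   # $phase left unquoted so "certs all" splits into two args
	    done
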
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
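
	The repeated pgrep lines are the apiserver wait loop: every 500 ms minikube checks whether a kube-apiserver process started from the minikube config is running yet. Roughly equivalent to:

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done
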
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
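
	WaitForSSH shells out to the system ssh client with host-key checking disabled and runs "exit 0" until a connection succeeds; the empty "SSH cmd err, output" line above marks the first successful attempt. The probe, reconstructed from the arguments logged a few lines earlier (abridged to the flags that matter):

	    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa \
	      docker@192.168.61.138 'exit 0'
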
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
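
	For reference, the delta above is simply guest time minus host time: 1724869350.622150588 - 1724869350.545207555 = 0.076943033 s ≈ 76.94 ms, which is inside minikube's clock-skew tolerance, so no guest time resync is triggered.
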
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
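
	The sed edits above adjust cri-o's drop-in config for the pause image, cgroup driver and unprivileged-port sysctl before the daemon is restarted. Their net effect on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands rather than captured from the node, should look roughly like this:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]
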
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
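	The lines above show minikube rewriting the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf over SSH (pause image, cgroup manager) and then restarting the runtime. Below is a minimal standalone sketch of those edits, assuming only the file path, image tag, and commands visible in the log; it is not minikube's actual crio.go implementation.

// Simplified, assumed sketch of the CRI-O drop-in edits shown in the log above:
// point pause_image at registry.k8s.io/pause:3.10 and switch the cgroup manager
// to cgroupfs so it matches the kubelet's cgroupDriver, then restart CRI-O.
package main

import (
	"fmt"
	"os/exec"
)

func configureCrio(conf string) error {
	cmds := [][]string{
		// pause_image = "registry.k8s.io/pause:3.10"
		{"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf},
		// cgroup_manager = "cgroupfs" (must agree with the kubelet config dumped later in the log)
		{"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		// reload systemd units and restart CRI-O so the new settings take effect
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}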
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
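	The image-loading sequence above follows a fixed pattern per image: stat the cached tarball under /var/lib/minikube/images, skip the copy when it already exists on the node, remove the stale tag with crictl, then load the tarball with podman into the store that CRI-O reads. Below is a minimal standalone sketch of that pattern, assuming only the commands and paths visible in the log; it is not minikube's actual cache_images.go implementation.

// Simplified, assumed sketch of loading one cached image into CRI-O's image store,
// mirroring the stat / crictl rmi / podman load sequence in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func loadCachedImage(tag, tarball string) error {
	// "copy: skipping ... (exists)" in the log corresponds to this stat check.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball %s not present: %w", tarball, err)
	}
	// Remove the image name first so the freshly loaded copy owns the tag;
	// ignore the error if the tag is not present yet.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", tag).Run()
	// podman load pushes the tarball into the shared containers/storage that CRI-O uses.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("registry.k8s.io/kube-proxy:v1.31.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0"); err != nil {
		fmt.Println(err)
	}
}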
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
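	The openssl/ln sequence above installs each CA into the node's system trust store: the certificate's OpenSSL subject hash becomes the /etc/ssl/certs/<hash>.0 symlink name (b5213941.0 for minikubeCA.pem here). Below is a minimal standalone sketch of that step, assuming only the commands shown in the log; it is not minikube's certs.go implementation.

// Simplified, assumed sketch of trusting one PEM certificate system-wide,
// mirroring the openssl x509 -hash / ln -fs calls in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func trustCert(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same shape as the log's `ln -fs` call; -f replaces an existing link.
	if err := exec.Command("sudo", "ln", "-fs", pem, link).Run(); err != nil {
		return fmt.Errorf("linking %s -> %s: %w", link, pem, err)
	}
	return nil
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}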
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
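	From here the restart waits on the apiserver's /healthz endpoint; the 403 (anonymous client) and 500 (post-start hooks still failing) responses that follow are expected until bootstrap completes. Below is a minimal standalone sketch of such a polling loop, assuming only the URL and status codes seen in the log; it is not minikube's actual api_server.go implementation.

// Simplified, assumed sketch of polling https://<ip>:8443/healthz until it
// returns 200, treating 403/500 responses as "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed cert during bootstrap, so the
		// anonymous health probe skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reports ok
			}
			// 403 (anonymous user) and 500 (post-start hooks still failing) both
			// mean "keep waiting"; the body lists the failing hooks, as in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ok within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.138:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}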
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
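The describe-nodes step above keeps failing with "connection refused" on localhost:8443, i.e. no kube-apiserver is serving on this node yet. A minimal manual check from inside the node, assuming shell access and the default apiserver port 8443 seen in the error (everything beyond the crictl probe already shown in the log is an assumption, not part of the harness), could look like:

  # Is anything listening on the apiserver port? (assumption: 8443, as in the error above)
  sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
  # Probe the health endpoint directly; -k skips certificate verification.
  curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
  # Same probe the harness runs: confirm CRI-O has no kube-apiserver container.
  sudo crictl ps -a --name=kube-apiserver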
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
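Each polling cycle above issues one "sudo crictl ps -a --quiet --name=<component>" per control-plane component and logs "No container was found matching" whenever the result is empty. A condensed sketch of the same probe, reusing only the component names and the crictl invocation that appear in the log (the loop itself is an illustration, not the harness's code):

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet kubernetes-dashboard; do
    ids=$(sudo crictl ps -a --quiet --name="$name")
    # Empty output corresponds to the "No container was found matching" lines above.
    [ -z "$ids" ] && echo "no container found matching \"$name\"" || echo "$name: $ids"
  done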
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
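The "Gathering logs for ..." lines each wrap one of the commands below. They are copied verbatim from the log, so running them on the node reproduces what the harness collects; grouping them into a single block is the only addition here:

  sudo journalctl -u kubelet -n 400                                          # kubelet
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # describe nodes
  sudo journalctl -u crio -n 400                                             # CRI-O
  sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a             # container status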
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
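The interleaved pod_ready lines come from separate parallel test runs (the 75908, 76435 and 76486 processes), each polling a metrics-server pod that never reports Ready. A short sketch of inspecting one of those pods directly, assuming kubectl access to the matching profile's context; <profile-context> is a placeholder and the label selector is an assumption, while the pod name is taken from the log:

    # Hedged sketch, not part of the captured log.
    kubectl --context <profile-context> -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context <profile-context> -n kube-system describe pod metrics-server-6867b74b74-lccm2
    kubectl --context <profile-context> -n kube-system logs deploy/metrics-server --tail=50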
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
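
The cycle above is minikube's standard diagnostics pass: for each control-plane component it lists container IDs with "crictl ps -a --quiet --name=<component>" and then tails the last 400 lines of each container's logs. A minimal, standalone sketch of that pattern follows (assumptions: crictl on PATH, passwordless sudo; the component list and 400-line tail mirror the trace above, everything else is illustrative and not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // gatherLogs mirrors the pattern in the trace above: find container IDs by
    // name filter, then fetch the last `tail` lines of each container's logs.
    func gatherLogs(component string, tail int) error {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return fmt.Errorf("listing %s containers: %w", component, err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		fmt.Printf("no container was found matching %q\n", component)
    		return nil
    	}
    	for _, id := range ids {
    		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
    		if err != nil {
    			return fmt.Errorf("logs for %s (%s): %w", component, id, err)
    		}
    		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
    	}
    	return nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
    	for _, c := range components {
    		if err := gatherLogs(c, 400); err != nil {
    			fmt.Println(err)
    		}
    	}
    }
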
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
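
The grep/rm sequence above is the stale-config check that runs before reinitialising the control plane: each leftover kubeconfig under /etc/kubernetes is probed for the expected control-plane endpoint and removed when the endpoint (or the file itself) is missing, so kubeadm init can regenerate it. A rough equivalent, using the endpoint and file list from the log and assuming passwordless sudo (illustrative sketch only, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the pattern (or the file) is missing.
    		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q not found in %s, removing stale config\n", endpoint, f)
    			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
    				fmt.Fprintln(os.Stderr, "remove failed:", err)
    			}
    		}
    	}
    }
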
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
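
For the default-k8s-diff-port-640552 profile, the startup above completes by polling the apiserver's /healthz endpoint until it answers 200/"ok", then waiting for the kube-system pods, the default service account and the kubelet service. A hedged sketch of just the healthz wait, reusing the URL from the log (the insecure TLS config and the 4-minute timeout are assumptions for illustration):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it reports "ok"
    // or the timeout elapses -- the same readiness signal the trace above waits on.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.226:8444/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver is healthy")
    }
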
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
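Editor's note (not part of the captured test output): the run above repeatedly polls kube-system pods for the "Ready" condition (the pod_ready.go lines) before declaring the cluster started. The following is a minimal, illustrative client-go sketch of that kind of wait, not minikube's actual implementation; the kubeconfig path, namespace, pod name, and 4-minute budget are placeholders taken from or modeled on the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the on-host kubeconfig (path as seen in the log; placeholder here).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod name is a placeholder modeled on the metrics-server pod in the log.
	const namespace, podName = "kube-system", "metrics-server-6867b74b74-d5x89"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), podName, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				// A pod counts as "Ready" when its PodReady condition is True.
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}

In this failure, the metrics-server pod stays Pending/ContainersNotReady for the whole budget, which is why the corresponding wait in the log eventually reports "context deadline exceeded".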
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
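The readiness checks logged above (kube-system pods, default service account, kubelet service, node conditions) can be reproduced by hand. A minimal sketch, assuming the kubectl context minikube created for the "no-preload-072854" profile and standard kubectl/systemctl invocations:

	# Mirror system_pods.go: list kube-system pods and check they are Running
	kubectl --context no-preload-072854 get pods -n kube-system
	# Mirror default_sa.go: confirm the default service account exists
	kubectl --context no-preload-072854 -n default get serviceaccount default
	# Mirror system_svc.go: check the kubelet unit on the node (run via minikube ssh)
	minikube -p no-preload-072854 ssh -- sudo systemctl is-active kubelet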
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
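The troubleshooting advice printed above can be followed as a short sequence on the node. A sketch using only the commands the advice itself names (the container ID is a placeholder to be taken from the listing):

	# Kubelet-side view of the failure
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# List non-pause Kubernetes containers under CRI-O
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Inspect the logs of a suspect container found above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>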
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
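The config check above amounts to: keep each kubeconfig under /etc/kubernetes only if it already references the expected control-plane endpoint, otherwise remove it before retrying kubeadm init. A shell sketch of the equivalent loop, using the endpoint and file names shown in the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Drop the file unless it points at the expected API server endpoint
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done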
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
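Per the suggestion above, a retry would pass the kubelet cgroup driver explicitly. A hypothetical invocation (the profile name is a placeholder; the kvm2 driver and crio runtime match this report's configuration):

	minikube start -p <profile> \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If it still fails, check the kubelet journal on the node as suggested
	minikube -p <profile> ssh -- sudo journalctl -xeu kubelet | tail -n 50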
	
	
	==> CRI-O <==
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.313357787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870153313336342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89446597-348e-49d0-b0b3-e82a438ec645 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.313835097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f25e65ea-bb2c-4b3b-ab88-cc2cb4af5426 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.313884429Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f25e65ea-bb2c-4b3b-ab88-cc2cb4af5426 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.314077176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f25e65ea-bb2c-4b3b-ab88-cc2cb4af5426 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.349060615Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05ef69dd-a168-45ef-98a6-145f774dbf67 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.349269815Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05ef69dd-a168-45ef-98a6-145f774dbf67 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.350742124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=acccfded-a322-4b48-a68c-6b16630becc1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.351115469Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870153351094139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=acccfded-a322-4b48-a68c-6b16630becc1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.351594534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=177d849d-45f6-41cd-9bbb-fd44820fe2ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.351647854Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=177d849d-45f6-41cd-9bbb-fd44820fe2ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.351870415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=177d849d-45f6-41cd-9bbb-fd44820fe2ab name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.387840125Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=935d5e25-e3db-47a2-bbf0-d8e74a9f545a name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.387912346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=935d5e25-e3db-47a2-bbf0-d8e74a9f545a name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.389307856Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=49792633-fca3-40aa-ab57-ba05b8f88b32 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.389948726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870153389904283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=49792633-fca3-40aa-ab57-ba05b8f88b32 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.390517960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb37795f-333c-4f04-b921-114b0db4cb17 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.390616529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb37795f-333c-4f04-b921-114b0db4cb17 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.390811297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb37795f-333c-4f04-b921-114b0db4cb17 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.424631245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ee9f088-d254-4718-8608-a6831b4f0d5c name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.424718791Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ee9f088-d254-4718-8608-a6831b4f0d5c name=/runtime.v1.RuntimeService/Version
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.425805184Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be1f6654-3013-4aa8-a748-356b42e5efb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.426233730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870153426209418,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be1f6654-3013-4aa8-a748-356b42e5efb1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.426881851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=177ebc0c-da54-4147-b625-1c924a2332f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.426940180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=177ebc0c-da54-4147-b625-1c924a2332f0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:35:53 embed-certs-014980 crio[707]: time="2024-08-28 18:35:53.427163654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=177ebc0c-da54-4147-b625-1c924a2332f0 name=/runtime.v1.RuntimeService/ListContainers
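
The crio debug entries above are CRI gRPC round-trips (Version, ImageFsInfo, ListContainers) recorded while the log collector polls the runtime. Below is a minimal sketch, not part of the test suite, of issuing the same ListContainers call directly against the CRI-O socket; the socket path (unix:///var/run/crio/crio.sock) is taken from the node annotations further down, and everything else in the snippet is an illustrative assumption.

    // listcontainers_sketch.go: reproduce the /runtime.v1.RuntimeService/ListContainers
    // request seen in the crio debug log, using the public CRI API types.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Dial the CRI-O socket the kubelet is configured with (assumed path).
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimev1.NewRuntimeServiceClient(conn)

        // An empty filter returns the full container list, matching the
        // "No filters were applied" lines in the log above.
        resp, err := rt.ListContainers(ctx, &runtimev1.ListContainersRequest{
            Filter: &runtimev1.ContainerFilter{},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
        }
    }

The output of such a sketch corresponds to the "container status" table that follows.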
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03453e02aa996       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   b36f0b3836447       storage-provisioner
	aba7772bf1d64       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   c3e3e300f9423       coredns-6f6b679f8f-djjbq
	e333800301f7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   61e4b445685f3       coredns-6f6b679f8f-cz29x
	a872642289813       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   3ce46a9948a7b       kube-proxy-hzw4m
	2b4e7e2bb458e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   4b4e3ea46ba40       kube-scheduler-embed-certs-014980
	4b2026b2021e7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   0057f243a3f39       kube-controller-manager-embed-certs-014980
	75c7c0076722f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   33f16483fba19       kube-apiserver-embed-certs-014980
	94dfa18e75e43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   61d32464e5734       etcd-embed-certs-014980
	fdfe0e43ef655       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   da2c894e5dfb6       kube-apiserver-embed-certs-014980
	
	
	==> coredns [aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-014980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-014980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=embed-certs-014980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:26:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-014980
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:35:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:31:51 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:31:51 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:31:51 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:31:51 +0000   Wed, 28 Aug 2024 18:26:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    embed-certs-014980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a365d754a9c94a5cbea721201dfbc6d0
	  System UUID:                a365d754-a9c9-4a5c-bea7-21201dfbc6d0
	  Boot ID:                    10d1724c-b9f0-41cf-8a3a-201f51d4a3fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-cz29x                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-6f6b679f8f-djjbq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-014980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-014980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-014980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-hzw4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-embed-certs-014980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-7nkmb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m11s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-014980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node embed-certs-014980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node embed-certs-014980 event: Registered Node embed-certs-014980 in Controller
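
The node report above shows embed-certs-014980 Ready, with metrics-server among the scheduled pods. The following sketch, not part of the report, reads the same conditions and allocatable figures with client-go; the node name is taken from the output above, and the KUBECONFIG environment variable is an assumed way of pointing at the profile's kubeconfig.

    // nodeinfo_sketch.go: fetch the conditions and allocatable resources that
    // "describe nodes" prints above.
    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.Background(),
            "embed-certs-014980", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Same condition rows as the Conditions table above.
        for _, cond := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
        }
        fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String(),
            "memory:", node.Status.Allocatable.Memory().String())
    }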
	
	
	==> dmesg <==
	[  +0.051109] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036651] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.723842] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.898225] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.520794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.185280] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.056169] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059224] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.178149] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.168839] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.291936] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +3.986311] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.810839] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.062133] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.492248] kauditd_printk_skb: 69 callbacks suppressed
	[  +8.024521] kauditd_printk_skb: 85 callbacks suppressed
	[Aug28 18:26] systemd-fstab-generator[2546]: Ignoring "noauto" option for root device
	[  +0.061496] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.010292] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +0.079197] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.732577] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.073541] systemd-fstab-generator[3017]: Ignoring "noauto" option for root device
	[  +6.948507] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef] <==
	{"level":"info","ts":"2024-08-28T18:26:30.918817Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T18:26:30.919045Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-28T18:26:30.922041Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.130:2380"}
	{"level":"info","ts":"2024-08-28T18:26:30.922001Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ce8d6acd2292c3b4","initial-advertise-peer-urls":["https://192.168.72.130:2380"],"listen-peer-urls":["https://192.168.72.130:2380"],"advertise-client-urls":["https://192.168.72.130:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.130:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T18:26:30.922021Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T18:26:31.450029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-28T18:26:31.450130Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-28T18:26:31.450181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 received MsgPreVoteResp from ce8d6acd2292c3b4 at term 1"}
	{"level":"info","ts":"2024-08-28T18:26:31.450223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became candidate at term 2"}
	{"level":"info","ts":"2024-08-28T18:26:31.450259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 received MsgVoteResp from ce8d6acd2292c3b4 at term 2"}
	{"level":"info","ts":"2024-08-28T18:26:31.450294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ce8d6acd2292c3b4 became leader at term 2"}
	{"level":"info","ts":"2024-08-28T18:26:31.450329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ce8d6acd2292c3b4 elected leader ce8d6acd2292c3b4 at term 2"}
	{"level":"info","ts":"2024-08-28T18:26:31.451759Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:26:31.452641Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ce8d6acd2292c3b4","local-member-attributes":"{Name:embed-certs-014980 ClientURLs:[https://192.168.72.130:2379]}","request-path":"/0/members/ce8d6acd2292c3b4/attributes","cluster-id":"5be39efd7fce098c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T18:26:31.452702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:26:31.453119Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5be39efd7fce098c","local-member-id":"ce8d6acd2292c3b4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:26:31.453217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:26:31.453267Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:26:31.453296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:26:31.454065Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:26:31.454821Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.130:2379"}
	{"level":"info","ts":"2024-08-28T18:26:31.455615Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:26:31.456330Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T18:26:31.462705Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:26:31.462754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:35:53 up 14 min,  0 users,  load average: 0.51, 0.30, 0.16
	Linux embed-certs-014980 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d] <==
	W0828 18:31:33.998760       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:31:33.998869       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:31:34.000025       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:31:34.000076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:32:34.000864       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:32:34.000947       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:32:34.001003       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:32:34.001050       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:32:34.002174       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:32:34.002225       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:34:34.002518       1 handler_proxy.go:99] no RequestInfo found in the context
	W0828 18:34:34.002951       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:34:34.003085       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0828 18:34:34.003118       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:34:34.004359       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:34:34.004434       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba] <==
	W0828 18:26:22.674173       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.674173       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.758654       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.848517       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.888182       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.936293       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.952450       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.999249       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.125746       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.172785       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.260146       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.307953       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.535462       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.547037       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:26.321287       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:26.958128       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.136137       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.172266       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.389355       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.433403       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.469907       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.539848       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.651643       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.660029       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.798179       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17] <==
	E0828 18:30:39.922120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:30:40.455847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:09.931499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:10.464000       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:39.938941       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:40.471479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:31:51.265087       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-014980"
	E0828 18:32:09.946022       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:10.478417       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:32:30.609040       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="358.121µs"
	E0828 18:32:39.953017       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:40.487965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:32:42.605464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="222.357µs"
	E0828 18:33:09.960852       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:10.495307       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:33:39.967902       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:40.502388       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:34:09.974675       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:10.510930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:34:39.982607       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:40.522305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:35:09.995606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:35:10.530744       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:35:40.002241       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:35:40.537956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:26:41.683707       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:26:41.710825       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E0828 18:26:41.710971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:26:41.953292       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:26:41.953376       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:26:41.953492       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:26:41.960469       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:26:41.960829       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:26:41.960858       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:26:41.964688       1 config.go:197] "Starting service config controller"
	I0828 18:26:41.964719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:26:41.964743       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:26:41.964746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:26:41.968813       1 config.go:326] "Starting node config controller"
	I0828 18:26:41.968835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:26:42.065416       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:26:42.065369       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:26:42.075609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87] <==
	W0828 18:26:33.111291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 18:26:33.116320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:33.938848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 18:26:33.938901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.074040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 18:26:34.074094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.126229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 18:26:34.126293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.131991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 18:26:34.132053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.171507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 18:26:34.171622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.244746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.244802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.256820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 18:26:34.256982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.266789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 18:26:34.266841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.277466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.277614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.314412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.314465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.546614       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 18:26:34.546662       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 18:26:37.289248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:34:39 embed-certs-014980 kubelet[2879]: E0828 18:34:39.591874    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:34:45 embed-certs-014980 kubelet[2879]: E0828 18:34:45.761779    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870085761376926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:45 embed-certs-014980 kubelet[2879]: E0828 18:34:45.761835    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870085761376926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:52 embed-certs-014980 kubelet[2879]: E0828 18:34:52.590724    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:34:55 embed-certs-014980 kubelet[2879]: E0828 18:34:55.767702    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870095763223465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:34:55 embed-certs-014980 kubelet[2879]: E0828 18:34:55.767751    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870095763223465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:04 embed-certs-014980 kubelet[2879]: E0828 18:35:04.591305    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:35:05 embed-certs-014980 kubelet[2879]: E0828 18:35:05.770260    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870105769642288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:05 embed-certs-014980 kubelet[2879]: E0828 18:35:05.770297    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870105769642288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:15 embed-certs-014980 kubelet[2879]: E0828 18:35:15.771716    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870115771386593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:15 embed-certs-014980 kubelet[2879]: E0828 18:35:15.771760    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870115771386593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:17 embed-certs-014980 kubelet[2879]: E0828 18:35:17.591204    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:35:25 embed-certs-014980 kubelet[2879]: E0828 18:35:25.773648    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870125773338041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:25 embed-certs-014980 kubelet[2879]: E0828 18:35:25.773692    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870125773338041,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:28 embed-certs-014980 kubelet[2879]: E0828 18:35:28.590251    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]: E0828 18:35:35.612704    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]: E0828 18:35:35.775588    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870135775284285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:35 embed-certs-014980 kubelet[2879]: E0828 18:35:35.775626    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870135775284285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:42 embed-certs-014980 kubelet[2879]: E0828 18:35:42.590276    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:35:45 embed-certs-014980 kubelet[2879]: E0828 18:35:45.776664    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870145776380842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:45 embed-certs-014980 kubelet[2879]: E0828 18:35:45.776711    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870145776380842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a] <==
	I0828 18:26:43.192006       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:26:43.201448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:26:43.201597       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:26:43.209505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:26:43.209952       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"249220a2-967f-454b-a646-05777cbb0811", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a became leader
	I0828 18:26:43.210006       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a!
	I0828 18:26:43.310621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-014980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7nkmb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb: exit status 1 (60.925058ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7nkmb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0828 18:27:34.435005   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:28:00.240662   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:28:09.993261   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:28:51.163080   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:29:21.459955   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:29:23.524587   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:30:14.229088   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-072854 -n no-preload-072854
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:36:18.825897877 +0000 UTC m=+6297.876459859
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-072854 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-072854 logs -n 25: (2.018108386s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
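The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force the cgroupfs cgroup manager, move conmon into the "pod" cgroup, and open unprivileged ports through default_sysctls. A minimal Go sketch of one such in-place key rewrite, with the path and values taken from the logged commands (illustrative only, not minikube's actual crio.go):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites a single `key = value` line in a crio drop-in config,
// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for key, value := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setKey(conf, key, value); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```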
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
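The preload decision above ("couldn't find preloaded image … assuming images are not preloaded", then "all images are preloaded" once the tarball has been extracted) comes down to listing images through crictl and looking for the expected control-plane tag. A rough sketch of that check, assuming crictl's `--output json` shape with a top-level `images` array carrying `repoTags` (a hypothetical helper, not the code in cache_images.go):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages matches the assumed shape of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given image tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.0")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Println("preloaded:", ok)
}
```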
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
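The kubeadm.yaml rendered a few lines above stacks four API documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch that splits the generated file and reports each document's kind (path copied from the log; purely illustrative):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// Path taken from the log; the file is only readable as root on the VM.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	kindRe := regexp.MustCompile(`(?m)^kind: *(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}
```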
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
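The `openssl x509 -noout -in <cert> -checkend 86400` commands above ask one question per certificate: does it expire within the next 86400 seconds (24 hours)? openssl exits non-zero if so, which is what triggers regeneration. An equivalent check in Go, as a sketch (the path in main is illustrative; any PEM certificate works):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires inside the
// given window, i.e. the same condition `openssl x509 -checkend` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// One of the certificates checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h, would regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
```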
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
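The healthz progression above is the normal restart sequence: first `connection refused` while the apiserver is still starting, then 403 because the anonymous probe can only read /healthz once the `[-]poststarthook/rbac/bootstrap-roles` hook has created the public-info RBAC bindings, then 500 with "reason withheld" entries while the remaining post-start hooks finish, and finally 200 "ok". A minimal sketch of that wait loop (endpoint and timeout copied from the log; not minikube's api_server.go):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200.
// 403 (anonymous, RBAC roles not bootstrapped yet) and 500 (post-start hooks
// still pending) both mean "keep waiting", as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe is anonymous and the cluster CA is not loaded here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.130:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```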
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
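pod_ready.go is doing per-pod readiness polling: for each system-critical pod it repeatedly reads the pod object and checks its Ready condition, as the later "has status \"Ready\":\"False\"" lines show. A minimal client-go sketch of that check, assuming a kubeconfig path placeholder and a fixed poll interval; this is not minikube's pod_ready.go itself.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// elapses. Sketch only.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log above shows checks roughly this far apart
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Placeholder kubeconfig path; the real flow uses the profile's client config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-4g2n8", 4*time.Minute))
}
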
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
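configureAuth above regenerates the machine's server certificate with SANs covering 127.0.0.1, the VM IP, the machine name, localhost and minikube, then scp's server.pem and server-key.pem into /etc/docker on the guest. The following is a compressed crypto/x509 sketch of generating such a SAN-bearing, CA-signed certificate; it is deliberately simplified (it creates a throwaway CA instead of reusing the ca.pem/ca-key.pem from .minikube/certs) and is not minikube's provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow reuses the existing minikube CA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-640552"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-640552", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.226")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
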
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
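The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host wall clock against a tolerance. A small sketch of that arithmetic, reproducing the 87ms delta from the log; the 2-second tolerance is an assumption, the log does not state minikube's actual cutoff.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, taken from the log above.
	guestRaw := "1724869312.259614140"
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec).UTC()

	// Host wall clock captured when the SSH command returned (from the log).
	host := time.Date(2024, 8, 28, 18, 21, 52, 172457684, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second // assumed threshold
	fmt.Printf("delta=%v, within %v tolerance: %v\n", delta, tolerance, delta <= tolerance)
}
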
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
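The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, forces conmon_cgroup to "pod", and opens unprivileged low ports. A rough Go equivalent of one of those key rewrites using regexp is sketched below, purely as an illustration; minikube itself shells out to sed exactly as shown in the log.

package main

import (
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config,
// mirroring the `sudo sed -i 's|^.*key = .*$|key = "value"|'` calls above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Paths and values taken from the log above.
	_ = setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10")
	_ = setCrioOption("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
}
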
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
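When the sysctl probe for net.bridge.bridge-nf-call-iptables fails (the br_netfilter module is not loaded yet), the fallback visible above is to modprobe the module and then make sure IPv4 forwarding is enabled. A tiny os/exec sketch of that probe-then-fallback pattern; it assumes root and is an illustration of the pattern, not minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: does the bridge netfilter sysctl exist yet?
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Fallback seen in the log: load br_netfilter.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	// Then enable IPv4 forwarding, as the log's next command does.
	if err := exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
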
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
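The kubelet drop-in above is rendered from the node's parameters (hostname override, node IP, binary path for the Kubernetes version) before being scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A minimal text/template sketch of that kind of rendering; the template string here is assumed for illustration rather than copied from minikube's source.

package main

import (
	"os"
	"text/template"
)

// Assumed template shape; minikube's real template carries more fields.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type node struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, node{
		KubernetesVersion: "v1.31.0",
		NodeName:          "default-k8s-diff-port-640552",
		NodeIP:            "192.168.39.226",
	})
}
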
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
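The kubeadm config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one stream) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. A minimal sketch of reading such a multi-document YAML stream back for a sanity check, assuming gopkg.in/yaml.v3 and an illustrative file path:

    // Sketch: decode a multi-document kubeadm config like the one above.
    // Path and the printed fields are illustrative.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		// Each document should carry apiVersion and kind
    		// (InitConfiguration, ClusterConfiguration, KubeletConfiguration, ...).
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }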
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
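The openssl x509 -checkend 86400 runs above ask, for each cert, whether it expires within the next 24 hours. The equivalent check in Go with only the standard library, using one of the paths from the log; the helper name is illustrative:

    // Sketch: the Go equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at certPath expires
    // before now+window.
    func expiresWithin(certPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }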
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
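The pod_ready lines above poll each control-plane pod until its Ready condition reports True, giving up after the stated timeout. A sketch of the same kind of check with client-go; the kubeconfig path, namespace and pod name below are illustrative, not taken from this run's file layout:

    // Sketch: wait for a pod's Ready condition with client-go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	for {
    		if ctx.Err() != nil {
    			panic("timed out waiting for pod to be Ready")
    		}
    		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-014980", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }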
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
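The healthz exchange above is a plain HTTPS poll of the apiserver: 403 while requests are still anonymous, 500 while the rbac/bootstrap-roles and priority-class post-start hooks have not finished, then 200 once the control plane settles. A minimal polling sketch with net/http; TLS verification is skipped here only to keep the example short, and the URL and timeout are illustrative:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz returned 200: ok
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.39.226:8444/healthz", 2*time.Minute); err != nil {
    		panic(err)
    	}
    }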
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
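The configureAuth step above generated a server certificate with the SANs listed in the log (127.0.0.1, 192.168.61.138, localhost, minikube, no-preload-072854) and copied ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A hedged way to sanity-check the result on the guest, using plain openssl commands rather than anything minikube itself runs:

	# verify the server cert chains to the provisioned CA, then list its SANs
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'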
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
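The container-runtime step above writes /etc/sysconfig/crio.minikube with CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' (the service CIDR) and restarts CRI-O. A small sketch for confirming the drop-in took effect; whether the flag is visible on the crio command line depends on the ISO's crio.service actually sourcing that file, which is an assumption here:

	cat /etc/sysconfig/crio.minikube     # drop-in written by the step above
	systemctl is-active crio             # CRI-O should be running again after the restart
	ps -o args= -C crio | grep -o -- '--insecure-registry [^ ]*' || echo "flag not visible on the command line"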
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
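The guest-clock check above is just the difference between the timestamp read from the guest (date +%s.%N) and the host-side wall clock; with the values from the log it comes out to the reported 76.943033ms delta, inside the tolerance. Reproducing the subtraction with the values copied from the log:

	echo '1724869350.622150588 - 1724869350.545207555' | bc   # prints .076943033 (seconds)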
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
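The find/-exec above renames any bridge or podman CNI config in /etc/cni/net.d to *.mk_disabled so that only the CNI minikube installs later gets loaded. The same operation written out more readably (same idea, not minikube's exact invocation):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'echo "disabling $1"; mv "$1" "$1.mk_disabled"' _ {} \;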
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
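The sequence above is a check-then-load fallback: reading net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on before CRI-O is restarted. The same pattern as a standalone sketch:

	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter                           # makes the bridge-nf sysctls appear
	fi
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'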
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
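Each "Loading image" / "Transferred and loaded ... from cache" pair above amounts to: stat the tarball under /var/lib/minikube/images (skipping the copy when it already exists), podman load it, then remove any stale tag with crictl. For a single image that boils down to roughly the following; the fact that crictl then sees the image relies on CRI-O and podman sharing image storage on the minikube guest, which is an assumption of this sketch:

	sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0   # tarball path from the log
	sudo crictl images | grep kube-scheduler                              # CRI-O should now see the image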
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
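At this point the kubeadm config rendered above has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. A quick spot-check of the values that matter for this profile (plain grep over that file; recent kubeadm also offers "kubeadm config validate --config <file>" if a stricter check is wanted):

	grep -E 'advertiseAddress|controlPlaneEndpoint|podSubnet|cgroupDriver|criSocket' /var/tmp/minikube/kubeadm.yaml.new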
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
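	The three blocks above repeat the same pattern for each CA bundle: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as <hash>.0 under /etc/ssl/certs so the system trust store picks it up. A rough Go equivalent of one iteration, assuming openssl is on PATH and the process may write /etc/ssl/certs (a sketch, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of a PEM certificate and
	// installs the <hash>.0 symlink that the system trust store expects.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Mirror the log's `test -L ... || ln -fs ...` behaviour: only create if missing.
		if _, err := os.Lstat(link); err == nil {
			return nil
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}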
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
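	Each `openssl x509 -checkend 86400` invocation above asks whether the certificate will still be valid for at least the next 24 hours. The same check can be expressed natively with crypto/x509; a small sketch (the path in main is one of the certs checked above, used purely for illustration):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path expires
	// within the given window, i.e. the equivalent of `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", expiring)
	}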
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
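	Restarting the primary control plane above is just a fixed sequence of `kubeadm init phase` invocations run against the regenerated kubeadm.yaml. A condensed sketch of that loop, with the binary location and config path taken from the log (illustrative only, not minikube's actual restart code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phases in the order they appear in the log above.
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			args := append([]string{"init", "phase"}, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm init phase %v failed: %v\n", phase, err)
				return
			}
		}
	}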
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
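	The healthz probing above simply retries GET /healthz (which legitimately returns 403 for anonymous users and 500 while post-start hooks are still settling) until it answers 200. A minimal polling sketch, assuming the apiserver's self-signed serving certificate is skipped rather than verified:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
	// or the deadline passes. 403/500 responses are treated as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-internal CA; skip verification in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.138:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}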
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
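	Every pod above is skipped with a pod_ready.go:98 message because the waiter first requires the hosting node to report Ready. The per-pod part of the check boils down to finding a Ready condition with status True in the pod status; a sketch using the upstream core/v1 types (the k8s.io/api module is assumed, client-go wiring omitted):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// isPodReady mirrors the condition the waiter is looking for: the pod reports
	// a Ready condition with status True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		pod := &corev1.Pod{}
		pod.Status.Conditions = []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		}
		fmt.Println("ready:", isPodReady(pod)) // ready: false
	}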
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
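The cycle that just closed (and each repeat below) is minikube's log-gathering fallback: every crictl query for a control-plane component returns an empty ID list, and `kubectl describe nodes` is refused on localhost:8443, meaning the kube-apiserver container never started on this node. A minimal sketch of the same checks run by hand on the node follows; the individual commands mirror the ones logged above, while how you reach the node (for example `minikube ssh -p <profile>`) is not shown in this excerpt and is only a placeholder.

    # on the affected node (reach it however your setup allows, e.g. minikube ssh -p <profile>)
    sudo crictl ps -a --quiet --name=kube-apiserver        # empty output = no apiserver container exists at all
    sudo journalctl -u kubelet -n 400 | tail -n 40         # kubelet's view of why the static pods are not starting
    sudo journalctl -u crio -n 400 | tail -n 40            # CRI-O side: image pulls, sandbox creation errors
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig            # keeps failing until something answers on :8443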
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
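The interleaved pod_ready.go:103 lines come from the other profiles in this run, each polling a metrics-server pod that never reports Ready. A quick way to confirm the same condition directly is sketched below; it assumes kubectl is pointed at the matching profile's context/kubeconfig, which is not shown in this excerpt, and the pod name is taken verbatim from the log above (swap in -f56j2 or -lccm2 for the other profiles).

    # prints "False" for as long as the probe above keeps logging pod_ready.go:103
    kubectl -n kube-system get pod metrics-server-6867b74b74-d5x89 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'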
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
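(Editor's note: the cycle above repeats throughout this log with only timestamps changing. It is minikube's control-plane probe on this node: each expected component container is looked up via crictl, none is found, and the tool falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal shell sketch of that same probe, built only from commands that appear verbatim in the log; the kubectl binary and kubeconfig paths are the ones the log shows and are assumed to exist on the node.)

	# Probe for each control-plane component container (empty output = not running)
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="$name"
	done
	# Fallback log collection, exactly as run above
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	     --kubeconfig=/var/lib/minikube/kubeconfig     # fails: localhost:8443 refused
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a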
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
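(Editor's note: the interleaved pod_ready lines from processes 75908, 76435, and 76486 are the other test clusters polling the Ready condition of their metrics-server pods, which stays "False" for the entire window shown. A hedged one-liner to inspect that condition directly; the label selector k8s-app=metrics-server is an assumption, since the log only shows the pod-name prefix metrics-server-6867b74b74.)

	kubectl -n kube-system get pods -l k8s-app=metrics-server \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'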
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
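(Editor's note: every describe-nodes attempt in this log fails the same way, with "connection to the server localhost:8443 was refused", i.e. no kube-apiserver is listening on the node. These checks are not part of the log; they are a sketch of how one could confirm that on the guest, assuming ss and curl are available there.)

	sudo ss -tlnp | grep ':8443'                         # no output => nothing bound to 8443
	curl -k --max-time 2 https://localhost:8443/healthz  # connection refused => apiserver not up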
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
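The gathering pass above repeats one pattern per component: `sudo crictl ps -a --quiet --name=<component>` to resolve container IDs (running or exited), then `sudo crictl logs --tail 400 <id>` to capture the tail of each log. A minimal sketch of those two steps run locally on the node (assuming crictl and sudo are available; minikube itself drives the same commands over SSH via ssh_runner):

```go
// gather_logs.go - local sketch of the crictl pattern shown in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `sudo crictl logs --tail 400 <id>`.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "kube-controller-manager", "storage-provisioner"} {
		ids, err := containerIDs(name)
		if err != nil {
			fmt.Println("listing", name, "failed:", err)
			continue
		}
		for _, id := range ids {
			logs, _ := tailLogs(id)
			fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
		}
	}
}
```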
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
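The verification steps that precede "Done!" are plain reads against the API server: list the kube-system pods, confirm the default service account, and read node capacity for the NodePressure summary (17734596Ki ephemeral storage, 2 CPUs above). A rough client-go equivalent of the pod and capacity reads; this is a sketch with an assumed kubeconfig path, not the minikube code itself:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the test harness points at its own profile.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// "waiting for kube-system pods to appear": list them with their phase.
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-60s %s\n", p.Name, p.Status.Phase)
	}

	// "verifying NodePressure condition": the log also reports node capacity,
	// which comes straight from the node status.
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
```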
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
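The repeated `kubectl get sa default` calls above are a retry loop: after creating the minikube-rbac clusterrolebinding, the harness keeps asking for the default service account roughly every half second until it exists, then records the elapsed time under elevateKubeSystemPrivileges. A local sketch of that loop in exec form (the 500ms interval is visible in the timestamps; the 2-minute cap here is an assumption):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The exact command from the log, retried until it exits zero.
	cmd := []string{
		"sudo", "/var/lib/minikube/binaries/v1.31.0/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			fmt.Println("default service account exists; kube-system privileges are in place")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```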
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
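Enabling the addons above reduces to copying the manifests into /etc/kubernetes/addons and running one `kubectl apply` over them with the cluster's kubeconfig pinned, exactly as the Run lines show. A local sketch of the metrics-server apply (paths copied from the log; assumes the same kubectl binary and sudo are present on the node):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments, which is how the log pins
	// KUBECONFIG for the in-VM kubectl binary.
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.31.0/kubectl", "apply",
		"-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"-f", "/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```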
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
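Each pod_ready wait above polls a pod until its PodReady condition reports True, which is what flips the logged status from "Ready":"False" to "Ready":"True". A compact sketch of that condition check with client-go types (a sketch of the idea, not minikube's helper; pod name, kubeconfig path, and the 2-second polling interval are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Example target; the waits above cycle through every control-plane pod.
	ns, name := "kube-system", "etcd-embed-certs-014980"
	deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil && podIsReady(p) {
			fmt.Printf("pod %q is Ready\n", name)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for", name)
}
```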
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
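The healthz wait above is a GET of /healthz with the cluster's credentials; an HTTP 200 with body "ok" ends the wait, and the control-plane version is read afterwards. Roughly the same probe via client-go (equivalent to `kubectl get --raw /healthz`); a sketch assuming ~/.kube/config points at this cluster:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().
		Get().
		AbsPath("/healthz").
		DoRaw(context.Background())
	if err != nil {
		panic(err) // non-2xx responses surface here as errors
	}
	fmt.Println(string(body)) // prints "ok" on a healthy control plane
}
```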
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
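The extra wait gives up here: metrics-server-6867b74b74-d5x89 stayed Pending with ContainersNotReady for the whole 4m0s budget, so WaitExtra returns context deadline exceeded and the harness falls back to gathering logs below. A small sketch for pulling the container waiting reasons out of such a pod with client-go (the k8s-app=metrics-server label selector and kubeconfig path are assumptions, not taken from the log):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumed label; the standard metrics-server manifests label the pods
	// with k8s-app=metrics-server.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase=%s\n", p.Name, p.Status.Phase)
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil {
				fmt.Printf("  container %s waiting: %s (%s)\n",
					st.Name, st.State.Waiting.Reason, st.State.Waiting.Message)
			}
		}
	}
}
```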
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
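	[editor's note] The wait loop above finishes once https://192.168.61.138:8443/healthz returns 200. A rough manual equivalent, run on the node using the kubectl binary and kubeconfig paths that appear in the log above (a sketch for reproduction, not part of the test output):

		sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/healthz'
		# expected output on a healthy control plane: ok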
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
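	[editor's note] When wait-control-plane times out like this, the checks kubeadm suggests can be run directly on the node. Consolidated into one sequence (the same commands kubeadm prints above; sudo is added on the assumption of a non-root shell):

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# then, for a failing container ID found in the listing:
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID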
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
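	[editor's note] The four grep/rm pairs above amount to a stale-config cleanup before the retry: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed. A bash sketch of what the log shows (not minikube's actual implementation):

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done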
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
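	The root cause recorded above is that the kubelet never answered its health check on 127.0.0.1:10248, and minikube's own suggestion points at the kubelet/CRI-O cgroup-driver handoff. A minimal sketch of acting on that hint follows; these commands were not part of the recorded run, `<profile>` is a placeholder for whichever profile hit this error, and the flag itself is the one minikube suggests above:

	  # Retry the start with the kubelet forced onto the systemd cgroup driver,
	  # exactly as the suggestion above proposes (sketch only):
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

	  # If it still times out, inspect the kubelet on the node, per the kubeadm advice:
	  minikube ssh -p <profile> "sudo systemctl status kubelet"
	  minikube ssh -p <profile> "sudo journalctl -xeu kubelet"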
	
	
	==> CRI-O <==
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.368827015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e7ac04e-536b-4af5-84c0-97de9aa1933d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.369011793Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e7ac04e-536b-4af5-84c0-97de9aa1933d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.380090824Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=1d7f8b09-93c6-4efa-8cbe-df8276abbffd name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.380171058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d7f8b09-93c6-4efa-8cbe-df8276abbffd name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.407899886Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85cc77f6-249b-4c99-b8e7-ed71bef9ecd8 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.408012301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85cc77f6-249b-4c99-b8e7-ed71bef9ecd8 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.409287456Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b5c151ef-0818-4438-a129-795642fed4ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.409721177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870180409697220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b5c151ef-0818-4438-a129-795642fed4ee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.410299892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eaa3a85-8612-4007-b29b-39e320aa9edf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.410394161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eaa3a85-8612-4007-b29b-39e320aa9edf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.410889916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eaa3a85-8612-4007-b29b-39e320aa9edf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.452171490Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=279af53d-d9f1-4148-b4f9-c458857d9614 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.452260078Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=279af53d-d9f1-4148-b4f9-c458857d9614 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.453309445Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73c2dac1-dc2f-467d-91f3-de94820784c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.453881395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870180453854747,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73c2dac1-dc2f-467d-91f3-de94820784c1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.454418068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16ef7b8a-16c9-430d-9e8e-9a6034f1fcb8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.454481557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16ef7b8a-16c9-430d-9e8e-9a6034f1fcb8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.454722518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16ef7b8a-16c9-430d-9e8e-9a6034f1fcb8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.487773930Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9cb2e718-a3c2-4756-aca6-eeff721b4d5a name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.487859131Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cb2e718-a3c2-4756-aca6-eeff721b4d5a name=/runtime.v1.RuntimeService/Version
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.489099645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=297eeab2-f333-4e08-8ee2-caef014b9717 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.489431723Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870180489403152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=297eeab2-f333-4e08-8ee2-caef014b9717 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.489880028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9c75fc9-e1d6-4ffd-a157-d3065f9d6ed6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.489932263Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9c75fc9-e1d6-4ffd-a157-d3065f9d6ed6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:36:20 no-preload-072854 crio[707]: time="2024-08-28 18:36:20.490124183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9c75fc9-e1d6-4ffd-a157-d3065f9d6ed6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	176a416d0685e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   a7204ccbcb800       storage-provisioner
	4f3ce8453601b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   339674dac8537       busybox
	b670cbb724f62       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   20809ee4cdfe8       coredns-6f6b679f8f-fjclq
	f1e183b4b26b5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   50e4e0e35116c       kube-proxy-tfxfd
	851b142e4bcda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   a7204ccbcb800       storage-provisioner
	4be517729ec13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   fd675e7ed02ee       kube-controller-manager-no-preload-072854
	701d65f0dbe97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   5f45bc1560161       etcd-no-preload-072854
	5eb6f94089b12       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   f3aa6cca52c6c       kube-scheduler-no-preload-072854
	2cb3211855569       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   7f52d06fd489d       kube-apiserver-no-preload-072854
	
	
	==> coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47460 - 61815 "HINFO IN 6038158238618917869.2219171541028845927. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012350093s
	
	
	==> describe nodes <==
	Name:               no-preload-072854
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-072854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=no-preload-072854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:13:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-072854
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:36:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:33:36 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:33:36 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:33:36 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:33:36 +0000   Wed, 28 Aug 2024 18:23:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.138
	  Hostname:    no-preload-072854
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 546e5317073343a7b3a22fbeb711cba0
	  System UUID:                546e5317-0733-43a7-b3a2-2fbeb711cba0
	  Boot ID:                    0132aa51-9333-4ab3-9af1-517df4f8d990
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-fjclq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-072854                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-072854             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-072854    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-tfxfd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-072854             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-d5x89              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-072854 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-072854 event: Registered Node no-preload-072854 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-072854 event: Registered Node no-preload-072854 in Controller
	
	
	==> dmesg <==
	[Aug28 18:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053066] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045138] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.112880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935648] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.541538] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.456956] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.068431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057051] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.213693] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.124885] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.284371] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.012638] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.070429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.713717] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
	[  +5.308619] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.298592] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +3.722510] kauditd_printk_skb: 61 callbacks suppressed
	[Aug28 18:23] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] <==
	{"level":"info","ts":"2024-08-28T18:22:49.848124Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-28T18:22:49.863973Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-28T18:22:49.864426Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"25036535ddb62a99","initial-advertise-peer-urls":["https://192.168.61.138:2380"],"listen-peer-urls":["https://192.168.61.138:2380"],"advertise-client-urls":["https://192.168.61.138:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.138:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-28T18:22:49.864537Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-28T18:22:49.864749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.138:2380"}
	{"level":"info","ts":"2024-08-28T18:22:49.864801Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.138:2380"}
	{"level":"info","ts":"2024-08-28T18:22:51.606857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:51.606998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:51.607047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 received MsgPreVoteResp from 25036535ddb62a99 at term 2"}
	{"level":"info","ts":"2024-08-28T18:22:51.607094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 became candidate at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:51.607119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 received MsgVoteResp from 25036535ddb62a99 at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:51.607146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"25036535ddb62a99 became leader at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:51.607172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 25036535ddb62a99 elected leader 25036535ddb62a99 at term 3"}
	{"level":"info","ts":"2024-08-28T18:22:51.610285Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"25036535ddb62a99","local-member-attributes":"{Name:no-preload-072854 ClientURLs:[https://192.168.61.138:2379]}","request-path":"/0/members/25036535ddb62a99/attributes","cluster-id":"2585e0d459a05355","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-28T18:22:51.610515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:51.610654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:51.611206Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:51.611257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:51.611885Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:51.611923Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:51.612864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T18:22:51.613117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.138:2379"}
	{"level":"info","ts":"2024-08-28T18:32:51.643021Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-08-28T18:32:51.654338Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":856,"took":"10.499808ms","hash":259965689,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2662400,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-28T18:32:51.654508Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":259965689,"revision":856,"compact-revision":-1}
	
	
	==> kernel <==
	 18:36:20 up 14 min,  0 users,  load average: 0.05, 0.04, 0.05
	Linux no-preload-072854 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] <==
	E0828 18:32:53.892853       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0828 18:32:53.892855       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:32:53.894048       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:32:53.894110       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:33:53.895048       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:33:53.895147       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:33:53.895217       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:33:53.895237       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0828 18:33:53.896288       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:33:53.896360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:35:53.897511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:35:53.897683       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:35:53.897772       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:35:53.897836       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:35:53.898969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:35:53.899027       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] <==
	E0828 18:30:56.551309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:30:56.971933       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:26.558196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:26.982714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:31:56.565175       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:31:56.989956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:32:26.571080       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:26.997441       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:32:56.577642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:32:57.004955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:33:26.584053       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:27.012897       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:33:36.871695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-072854"
	I0828 18:33:54.898098       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="278.952µs"
	E0828 18:33:56.590173       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:33:57.020116       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:34:06.896856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="111.835µs"
	E0828 18:34:26.596531       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:27.027514       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:34:56.602577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:34:57.036504       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:35:26.610213       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:35:27.046231       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:35:56.616638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:35:57.055366       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:22:54.528705       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:22:54.537296       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.138"]
	E0828 18:22:54.537408       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:22:54.601928       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:22:54.602001       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:22:54.602029       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:22:54.609234       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:22:54.609509       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:22:54.609569       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:54.624733       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:22:54.624835       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:22:54.624859       1 config.go:197] "Starting service config controller"
	I0828 18:22:54.624915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:22:54.632082       1 config.go:326] "Starting node config controller"
	I0828 18:22:54.632177       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:22:54.725503       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:22:54.725542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:22:54.732313       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] <==
	I0828 18:22:50.706938       1 serving.go:386] Generated self-signed cert in-memory
	W0828 18:22:52.849647       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:22:52.849724       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:22:52.849735       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:22:52.849740       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:22:52.922307       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 18:22:52.926384       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:52.935280       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 18:22:52.935685       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 18:22:52.936684       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:22:52.935707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 18:22:53.037465       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:35:09 no-preload-072854 kubelet[1418]: E0828 18:35:09.030977    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870109027970246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:19 no-preload-072854 kubelet[1418]: E0828 18:35:19.032763    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870119032225334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:19 no-preload-072854 kubelet[1418]: E0828 18:35:19.032805    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870119032225334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:22 no-preload-072854 kubelet[1418]: E0828 18:35:22.880547    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:35:29 no-preload-072854 kubelet[1418]: E0828 18:35:29.034788    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870129034182192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:29 no-preload-072854 kubelet[1418]: E0828 18:35:29.035269    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870129034182192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:37 no-preload-072854 kubelet[1418]: E0828 18:35:37.880421    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:35:39 no-preload-072854 kubelet[1418]: E0828 18:35:39.037127    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870139036870410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:39 no-preload-072854 kubelet[1418]: E0828 18:35:39.037166    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870139036870410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]: E0828 18:35:48.882461    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]: E0828 18:35:48.900231    1418 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:35:48 no-preload-072854 kubelet[1418]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:35:49 no-preload-072854 kubelet[1418]: E0828 18:35:49.038527    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870149038257899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:49 no-preload-072854 kubelet[1418]: E0828 18:35:49.038569    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870149038257899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:59 no-preload-072854 kubelet[1418]: E0828 18:35:59.040353    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870159039989014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:35:59 no-preload-072854 kubelet[1418]: E0828 18:35:59.041000    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870159039989014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:36:00 no-preload-072854 kubelet[1418]: E0828 18:36:00.880393    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:36:09 no-preload-072854 kubelet[1418]: E0828 18:36:09.042757    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870169042465361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:36:09 no-preload-072854 kubelet[1418]: E0828 18:36:09.042813    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870169042465361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:36:14 no-preload-072854 kubelet[1418]: E0828 18:36:14.880464    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:36:19 no-preload-072854 kubelet[1418]: E0828 18:36:19.044200    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870179043571346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:36:19 no-preload-072854 kubelet[1418]: E0828 18:36:19.044227    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870179043571346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] <==
	I0828 18:23:25.152722       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:23:25.163272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:23:25.163498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:23:25.171784       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:23:25.171969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4!
	I0828 18:23:25.175093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efda45f4-ed40-4df1-90a2-f5b7fe26e6b6", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4 became leader
	I0828 18:23:25.274778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4!
	
	
	==> storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] <==
	I0828 18:22:54.401916       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:23:24.404886       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-072854 -n no-preload-072854
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-072854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d5x89
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89: exit status 1 (67.965603ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d5x89" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.15s)
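The post-mortem captured above can be rerun by hand with the same kubectl queries the helper issues; a minimal sketch, assuming the no-preload-072854 profile and its kubeconfig context still exist (the metrics-server pod name is the one reported in this run and will differ in other runs):

	# list pods that are not in the Running phase, as helpers_test.go does
	kubectl --context no-preload-072854 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# describe a pod reported above; a NotFound error (as seen here) means it was already deleted
	kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89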

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
E0828 18:30:44.524315   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:30:44.864724   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 19 times in a row]
E0828 18:31:03.674670   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 51 times in a row]
E0828 18:31:55.397029   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 13 times in a row]
E0828 18:32:07.928297   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 19 times in a row]
E0828 18:32:26.738796   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 7 times in a row]
E0828 18:32:34.435306   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 26 times in a row]
E0828 18:33:00.240494   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 10 times in a row]
E0828 18:33:09.993449   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 8 times in a row]
E0828 18:33:18.462918   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
  [identical warning logged 22 times in a row]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
E0828 18:33:51.163589   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 7 times]
E0828 18:33:57.502209   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 23 times]
E0828 18:34:21.459368   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 3 times]
E0828 18:34:23.523948   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 9 times]
E0828 18:34:33.057609   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 72 times]
E0828 18:35:44.865345   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
[the same connection-refused warning repeated 19 times]
E0828 18:36:03.674591   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 50 more times)
E0828 18:36:55.396893   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 31 more times)
E0828 18:37:26.598808   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 6 more times)
E0828 18:37:34.435312   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 25 more times)
E0828 18:38:00.240483   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 9 more times)
E0828 18:38:09.993443   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
(last message repeated 40 more times)
E0828 18:38:51.163041   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
E0828 18:39:21.458973   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
E0828 18:39:23.524299   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (228.227648ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-131737" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
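The repeated "connection refused" warnings above come from the test helper polling the apiserver for pods matching the dashboard label selector until its 9m0s context expires. A minimal sketch of that polling pattern, assuming client-go; the function and variable names here are illustrative, not the actual helpers_test.go code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods polls the apiserver for pods matching selector until one shows
    // up or ctx expires. Each failed List is logged as a warning, mirroring the
    // repeated lines above.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			// With the apiserver down, this is where "dial tcp ...:8443:
    			// connect: connection refused" surfaces on every poll.
    			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
    		} else if len(pods.Items) > 0 {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // reported as "context deadline exceeded" after 9m0s
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
    	defer cancel()
    	fmt.Println(waitForPods(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"))
    }

Because every List call fails while the apiserver on 192.168.50.99:8443 is down, a loop of this shape can only exit via the deadline, which matches the failure reported above.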
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (219.257843ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
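The --format={{.APIServer}} flag used above is a Go text/template applied to the status result, which is why the command prints a bare "Stopped" or "Running" and conveys overall health through its exit code instead. A small sketch of the same templating pattern, assuming a simplified status struct (field names are illustrative, not minikube's actual types):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a stand-in for the structure the status command renders; the
    // real type has more fields.
    type Status struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
    	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
    	// Prints "Stopped", matching the -- stdout -- block above; the degraded
    	// state is signalled separately via the process exit code.
    	if err := tmpl.Execute(os.Stdout, st); err != nil {
    		panic(err)
    	}
    }

The "(may be ok)" note reflects that the helper tolerates a non-zero exit here: a stopped apiserver still yields usable text output for the post-mortem.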
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25: (1.605056097s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
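	Each audit row above corresponds to one invocation of the built minikube binary by the test harness (the "(dbg) Run:" / "(dbg) Done:" lines). A rough sketch of how such an invocation and its timing can be captured, using a hypothetical runMinikube helper rather than the real test code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runMinikube shells out to the built binary, captures combined output, and
    // measures duration, the way each audit entry records start and end times.
    func runMinikube(args ...string) (string, time.Duration, error) {
    	start := time.Now()
    	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    	return string(out), time.Since(start), err
    }

    func main() {
    	out, d, err := runMinikube("stop", "-p", "old-k8s-version-131737", "--alsologtostderr", "-v=3")
    	fmt.Printf("took %s, err=%v\n%s", d, err, out)
    }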
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
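The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift when it stays under a tolerance (87ms here). A minimal sketch of that comparison in Go, with a hypothetical one-second tolerance standing in for whatever threshold minikube actually applies:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // checkClockDelta parses the guest's `date +%s.%N` output and reports the
    // drift from the local clock and whether it stays within the tolerance.
    func checkClockDelta(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return 0, false, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance, nil
    }

    func main() {
    	// Guest timestamp taken from the log line above; the tolerance is a placeholder.
    	delta, ok, err := checkClockDelta("1724869312.259614140", 1*time.Second)
    	fmt.Println(delta, ok, err)
    }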
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
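The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed commands (pause image pinned to registry.k8s.io/pause:3.10, cgroup_manager forced to cgroupfs, conmon_cgroup and default_sysctls adjusted) and then restarts CRI-O. A rough Go sketch of the two central replacements, operating on a hypothetical local copy of the drop-in file rather than shelling out to sed as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioDropIn mirrors the sed edits from the log: pin the pause image
    // and force the cgroupfs cgroup manager in the CRI-O drop-in config.
    func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Hypothetical local path; on the VM the file is /etc/crio/crio.conf.d/02-crio.conf.
    	if err := rewriteCrioDropIn("02-crio.conf", "registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }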
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
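While this is going on, the old-k8s-version-131737 machine is still booting, so libmachine keeps querying the libvirt DHCP leases and backs off between attempts, as the retry.go lines above show. A small sketch of such a retry-with-backoff wait, with a hypothetical lookupIP callback standing in for the lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls lookupIP with a growing backoff until the machine reports
    // an address or the deadline passes.
    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	backoff := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
    		time.Sleep(backoff)
    		if backoff < 5*time.Second {
    			backoff *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	// Hypothetical lookup that never succeeds, just to exercise the loop.
    	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
    	fmt.Println(err)
    }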
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
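Each CA certificate above is copied under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for the minikube CA). A sketch of producing that link, assuming openssl is on PATH; the paths in main are placeholders, not the ones from this run:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash asks openssl for the certificate's subject hash and
    // creates the <hash>.0 symlink that the TLS trust-store lookup expects.
    func linkBySubjectHash(certPath, certsDir string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	// Hypothetical paths; in the log they live under /usr/share/ca-certificates and /etc/ssl/certs.
    	link, err := linkBySubjectHash("minikubeCA.pem", ".")
    	fmt.Println(link, err)
    }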
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
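The `openssl x509 -checkend 86400` calls above confirm that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. The same check can be expressed directly with crypto/x509; a sketch, using a hypothetical local path:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path is still valid for at
    // least the given duration, like `openssl x509 -checkend`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("apiserver.crt", 24*time.Hour) // placeholder path
    	fmt.Println(ok, err)
    }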
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
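The pod_ready.go lines above poll each kube-system pod until its Ready condition reports True (coredns took about 12.7s here, the static control-plane pods were already Ready). A minimal client-go sketch of that condition check; the clientset construction is assumed to happen elsewhere, for example from a kubeconfig:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady fetches the pod and reports whether its Ready condition is True,
    // which is what the readiness polling in the log waits for.
    func isPodReady(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
    	pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	var client kubernetes.Interface // assumed to be built elsewhere from a kubeconfig
    	if client == nil {
    		fmt.Println("no clientset configured; sketch only")
    		return
    	}
    	ready, err := isPodReady(context.Background(), client, "kube-system", "coredns-6f6b679f8f-4g2n8")
    	fmt.Println(ready, err)
    }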
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
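The loop above keeps hitting /healthz until the 403/500 responses give way to a 200 "ok". As a rough host-side equivalent (assuming the kubeconfig context carries the same name as the minikube profile, which is an assumption, not something the log states):
    # poll until the aggregated health endpoint reports ok
    until kubectl --context default-k8s-diff-port-640552 get --raw /healthz 2>/dev/null | grep -qx ok; do sleep 0.5; done
    # the per-check [+]/[-] breakdown seen in the log comes from the verbose variant
    kubectl --context default-k8s-diff-port-640552 get --raw '/healthz?verbose'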
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
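The 496-byte /etc/cni/net.d/1-k8s.conflist copied here is not reproduced in the log. As an illustrative sketch only (the subnet and every field below are assumptions, not the exact file minikube generates), a minimal bridge-plus-portmap chain of roughly that shape looks like:
    # write an example bridge CNI chain; contents are illustrative, not minikube's actual file
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF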
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
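The WaitExtra pass above deliberately skips pods whose node is still NotReady and logs them as errors before moving on. Expressed with plain kubectl (context name again assumed to match the profile), the same readiness gates would look roughly like:
    kubectl --context default-k8s-diff-port-640552 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context default-k8s-diff-port-640552 -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m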
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
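A sketch of the equivalent node-level gate, under the same assumption about the kubeconfig context name:
    kubectl --context default-k8s-diff-port-640552 wait node/default-k8s-diff-port-640552 --for=condition=Ready --timeout=6m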
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
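Once those manifests are applied, one way to confirm the addon from outside the test harness would be something like the following (the deployment name metrics-server is the conventional one for this addon and is assumed here rather than taken from the log):
    kubectl --context default-k8s-diff-port-640552 -n kube-system rollout status deployment/metrics-server --timeout=4m
    kubectl --context default-k8s-diff-port-640552 top nodes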
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
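For reference, the same addon state can be inspected or toggled per profile from the minikube CLI; a hedged example matching the set enabled above:
    minikube -p default-k8s-diff-port-640552 addons list
    minikube -p default-k8s-diff-port-640552 addons enable metrics-server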
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
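configureAuth above re-uses the minikube CA under .minikube/certs and issues a server certificate whose SANs are listed on the "generating server cert" line; the resulting ca.pem, server.pem and server-key.pem are then copied to /etc/docker on the guest. An equivalent certificate could be produced by hand with openssl (sketch only; the key type, size and validity period below are assumptions, since the log does not show them):

    # illustrative openssl equivalent of the server cert generated above
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.old-k8s-version-131737" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.99,DNS:localhost,DNS:minikube,DNS:old-k8s-version-131737')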
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
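The sysconfig drop-in written just above hands CRI-O the extra flag --insecure-registry 10.96.0.0/12, i.e. the cluster's service CIDR, presumably so in-cluster registries can be reached without TLS. One way to confirm the drop-in landed and CRI-O restarted cleanly (illustrative):

    minikube ssh -p old-k8s-version-131737 "cat /etc/sysconfig/crio.minikube; systemctl is-active crio"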
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
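The reported delta is simply the difference between the two clocks sampled above: 1724869331.947970723 s (guest) - 1724869331.879198847 s (host-observed remote time) = 0.068771876 s ≈ 68.77 ms, comfortably inside the tolerance checked by fix.go.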
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
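After the three sed edits above, /etc/crio/crio.conf.d/02-crio.conf should carry settings equivalent to the following (only these key/value pairs are taken from the log; any surrounding TOML sections are omitted here):

    pause_image = "registry.k8s.io/pause:3.2"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"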
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
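The sysctl probe above exits 255 only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; after modprobe the path appears, and IPv4 forwarding is switched on. The same steps done by hand would look like this (illustrative; persisting them across reboots would need modules-load.d/sysctl.d entries not shown in this log):

    sudo modprobe br_netfilter                   # creates /proc/sys/net/bridge/*
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sysctl net.bridge.bridge-nf-call-iptables    # the probe that failed above now resolves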
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
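The DBG retry lines for no-preload-072854 are the kvm2 driver polling libvirt until the freshly started domain picks up a DHCP lease. The same lease table can be read directly on the host (illustrative; assumes virsh access to the qemu:///system URI that appears in the profile config):

    virsh --connect qemu:///system net-dhcp-leases mk-no-preload-072854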
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
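The [Service] drop-in above follows the usual systemd override pattern: the empty ExecStart= clears the packaged command, and the second ExecStart re-points kubelet at the minikube-managed v1.20.0 binary with the CRI-O socket and node IP. It is installed a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; once in place, the effective unit can be reviewed with (illustrative):

    systemctl cat kubelet
    systemctl status kubelet --no-pager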
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
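The kubeadm config rendered above is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new. Outside of minikube, a file like this would drive the control-plane bootstrap via (illustrative; minikube's actual invocation and any extra flags it passes are not shown in this excerpt):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml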
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
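	The image-loading phase above follows the same pattern for each image: inspect the container runtime for the image, remove any stale copy with crictl, stat the cached tarball under /var/lib/minikube/images (skipping the transfer when it already exists), and finally load the tarball with podman. A condensed Go sketch of that check-then-load step; the helper name is an assumption and, unlike the log, it runs the commands locally rather than over SSH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadCachedImage loads a cached image tarball only when the runtime does not
	// already have the image. Paths and image names are illustrative.
	func loadCachedImage(image, tarball string) error {
		// "podman image inspect" exits non-zero when the image is absent.
		if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
			return nil // already present in the container runtime
		}
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("cached tarball %s not available: %w", tarball, err)
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/kube-scheduler:v1.31.0",
			"/var/lib/minikube/images/kube-scheduler_v1.31.0")
		if err != nil {
			fmt.Println(err)
		}
	}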
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
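	The six "openssl x509 ... -checkend 86400" runs above verify that none of the existing control-plane certificates expire within the next 24 hours before they are reused. The same check expressed as a Go sketch (the path in main and the helper name are illustrative assumptions):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath expires
	// (or has expired) within the given window, like "openssl x509 -checkend".
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Illustrative path; the log checks certs under /var/lib/minikube/certs on the node.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}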
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
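	The healthz polling above is expected to return 500 with per-poststarthook status lines (rbac/bootstrap-roles, bootstrap-controller, apiservice-registration-controller, and so on) until the apiserver finishes its bootstrap hooks; the wait loop simply retries until it receives 200 OK. A minimal Go sketch of that retry loop, assuming an HTTPS client that skips certificate verification purely for illustration (minikube's real check authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the timeout elapses, printing the hook status body on each failed attempt.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.138:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}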
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
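Interleaved with the pid 77396 loop, three other test processes (75908, 76435 and 76486) are polling their own clusters and logging every few seconds that their metrics-server pod still reports Ready=False, which is why the same pod_ready.go line repeats with only the timestamp changing. A quick way to inspect the pod state directly, assuming the addon's usual k8s-app=metrics-server label:

  kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
  kubectl -n kube-system describe pod -l k8s-app=metrics-server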
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
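Every describe-nodes attempt in this loop fails the same way: the bundled v1.20.0 kubectl cannot reach the API server on localhost:8443, which is consistent with crictl finding no kube-apiserver container at all. Two checks that could confirm this on the node, assuming the iproute2 ss tool is available there:

  # nothing should be listening on the apiserver port if the container never started
  sudo ss -ltn 'sport = :8443'
  # the exact command the log runs, verbatim, for re-testing once the apiserver is up
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig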
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
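Each pass ends by pulling the last 400 lines of the kubelet and CRI-O journals plus warning-level dmesg; with no containers present, the kubelet journal is the most likely place to show why the static control-plane pods never start. The same commands the loop runs, copied verbatim from the log, can be reused directly on the node:

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400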
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
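With no control-plane containers found, every cycle ends with the same fallback gather: kubelet and CRI-O logs via journalctl, dmesg, `kubectl describe nodes`, and a crictl/docker container listing. A Go sketch that runs those same host commands follows (flags copied from the log lines above; error handling simplified, so this is an illustration rather than minikube's implementation).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The fallback gather commands seen in the log, keyed by the label logs.go prints.
	gather := []struct{ label, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, g := range gather {
		fmt.Printf("Gathering logs for %s ...\n", g.label)
		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
		if err != nil {
			// With the apiserver down, "describe nodes" fails exactly as in the log:
			// the connection to localhost:8443 is refused.
			fmt.Printf("failed %s: %v\n", g.label, err)
		}
		fmt.Print(string(out))
	}
}
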
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
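
For the v1.20.0 profile handled by process 77396, none of the expected control-plane containers exist, so after roughly four minutes the restart path gives up ("Unable to restart control-plane node(s), will reset cluster") and falls back to a full kubeadm reset followed by a fresh kubeadm init (the init for this process starts at 18:26:28 further down). A sketch of that fallback, assembled from the commands shown in the log; the --ignore-preflight-errors list is abbreviated here:

    # Reset-and-reinit fallback, pieced together from the log lines
    # (paths and version taken from the log; preflight ignore list shortened).
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem
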
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
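
The block above is minikube's stale-kubeconfig cleanup: for each file under /etc/kubernetes it greps for the expected control-plane endpoint and removes any file that does not contain it. In this run the files do not exist yet (the earlier ls exited with status 2), so every grep fails and the rm calls are effectively no-ops. A compact equivalent of that loop, using the same endpoint string as the log:

    # Compact sketch of the stale-config cleanup performed line by line above.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done
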
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
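
Process 76486 (the profile serving on 192.168.39.226:8444) waits for the apiserver by polling its /healthz endpoint until it answers 200 "ok", which takes about 3.8 seconds here. An illustrative manual probe against the same endpoint, not something the harness itself runs, would be:

    # Manual health probe against the endpoint from the log; -k skips TLS
    # verification since the apiserver cert is not in the local trust store.
    curl -k https://192.168.39.226:8444/healthz
    # expected response body: ok
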
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
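
At this point the apiserver is healthy, all eight kube-system pods are listed (metrics-server-6867b74b74-lccm2 still Pending / not Ready), the default service account exists, and the kubelet service is active, so the default-k8s-diff-port-640552 profile finishes startup and writes its kubeconfig. A couple of illustrative follow-up checks from the host; the context name is assumed to match the profile name, as the "Done!" line indicates:

    # Illustrative post-start checks; not part of the harness output above.
    kubectl config current-context        # should print default-k8s-diff-port-640552
    kubectl -n kube-system get pods       # metrics-server expected to still show not-ready
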
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
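
kubeadm init for embed-certs-014980 succeeds but warns that the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API for ClusterConfiguration and InitConfiguration. The warning itself names the fix; a sketch of applying it to minikube's generated config follows (the output path is a made-up example, not a file the harness creates):

    # Migration suggested by the kubeadm warning above; old-config is the file
    # passed to kubeadm init, new-config path is illustrative only.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml
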
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
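
After init, the bridge CNI conflist is written, the minikube-rbac clusterrolebinding is created, the node is labeled, and the code then polls "kubectl get sa default" roughly every half second until the default service account exists (the elevateKubeSystemPrivileges wait, about 4.8s in this run). A shell equivalent of that wait, assuming the same on-node binary and kubeconfig paths as the log:

    # Sketch of the default-service-account wait shown in the polling lines above.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
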
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
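The startup sequence that ends with the "Done!" line above includes repeated probes of the apiserver's /healthz endpoint (api_server.go: "Checking apiserver healthz at https://192.168.72.130:8443/healthz ... returned 200") before the cluster is declared ready. As a rough, hedged illustration of that kind of poll (this is not minikube's actual implementation; the URL, timeout values, and the InsecureSkipVerify shortcut are assumptions made only for the example), a minimal Go sketch might look like this:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, or gives up after
// timeout. It mirrors the kind of check seen in the log above, but is
// only an illustrative sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves HTTPS with a cluster-internal CA; skipping
		// verification here is an assumption to keep the sketch short.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, string(body))
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if err := waitForHealthz("https://192.168.72.130:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```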
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
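Before the second "Done!" line above, the log-gathering loop repeatedly resolves container IDs with `sudo crictl ps -a --quiet --name=<component>` and then tails each one with `crictl logs --tail 400 <id>` (logs.go "Gathering logs for ..."). The following is a minimal, hedged Go sketch of that two-step pattern; running crictl locally via exec rather than over SSH, and the component list in main, are assumptions for the example, not the test harness's actual code:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of containers whose name matches component,
// using the same crictl invocation that appears in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs prints the last n lines of each container's log, mirroring
// "crictl logs --tail 400 <id>" from the log-gathering steps.
func tailLogs(ids []string, n int) {
	for _, id := range ids {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", id, out)
	}
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Printf("listing %s containers failed: %v\n", component, err)
			continue
		}
		tailLogs(ids, 400)
	}
}
```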
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
	
	
	==> CRI-O <==
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.211115714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870368211091970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23badbe5-4efe-4927-bfc9-772ce31afc88 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.211699041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ce7fbe0d-d4af-469a-9b4b-50435e96f88d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.211749972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ce7fbe0d-d4af-469a-9b4b-50435e96f88d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.211780702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ce7fbe0d-d4af-469a-9b4b-50435e96f88d name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.242818006Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c81c726-bfd6-4da9-b1a0-3610cbb0be43 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.242886844Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c81c726-bfd6-4da9-b1a0-3610cbb0be43 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.244392346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22309e65-465e-496d-8c7f-dc1901b15168 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.244977102Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870368244945415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22309e65-465e-496d-8c7f-dc1901b15168 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.245767308Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b7f6b328-78f4-48ca-b19b-333bd49810ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.245824167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b7f6b328-78f4-48ca-b19b-333bd49810ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.245863887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b7f6b328-78f4-48ca-b19b-333bd49810ca name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.276817583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e4fbca2a-7415-485d-8c24-b7228671d530 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.276896892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e4fbca2a-7415-485d-8c24-b7228671d530 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.278299124Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9e89b29-106a-4e47-9f5c-c2a5ca62abbd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.278788065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870368278758047,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9e89b29-106a-4e47-9f5c-c2a5ca62abbd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.279500944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daa1f282-531d-408f-ac1d-c2ba5a67db9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.279556995Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daa1f282-531d-408f-ac1d-c2ba5a67db9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.279595978Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=daa1f282-531d-408f-ac1d-c2ba5a67db9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.314483647Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9dbebbb4-d255-4517-8ba1-3725b3e39c89 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.314558658Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9dbebbb4-d255-4517-8ba1-3725b3e39c89 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.315888805Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c5ae6af-fb00-4152-9d69-579a0741e960 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.316253290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870368316232666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c5ae6af-fb00-4152-9d69-579a0741e960 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.316835081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a83e8235-2074-470f-abae-fee76cc078a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.316885213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a83e8235-2074-470f-abae-fee76cc078a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:39:28 old-k8s-version-131737 crio[633]: time="2024-08-28 18:39:28.316917011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a83e8235-2074-470f-abae-fee76cc078a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug28 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053841] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038492] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug28 18:22] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.351947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.186067] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.056442] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067838] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.210439] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.181798] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.238436] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.531745] systemd-fstab-generator[889]: Ignoring "noauto" option for root device
	[  +0.068173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.717012] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[ +12.982776] kauditd_printk_skb: 46 callbacks suppressed
	[Aug28 18:26] systemd-fstab-generator[5132]: Ignoring "noauto" option for root device
	[Aug28 18:28] systemd-fstab-generator[5416]: Ignoring "noauto" option for root device
	[  +0.064360] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:39:28 up 17 min,  0 users,  load average: 0.01, 0.06, 0.08
	Linux old-k8s-version-131737 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:628 +0x53
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a6d070, 0xc00093b660)
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: goroutine 156 [chan receive]:
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000a75560)
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: goroutine 157 [select]:
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c13ef0, 0x4f0ac20, 0xc000bac320, 0x1, 0xc00009e0c0)
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001d4380, 0xc00009e0c0)
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000a6d0b0, 0xc00093b720)
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 28 18:39:28 old-k8s-version-131737 kubelet[6611]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 28 18:39:28 old-k8s-version-131737 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 28 18:39:28 old-k8s-version-131737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (225.199283ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-131737" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.50s)
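The empty container list, the kubelet panic, and the repeated "connection to the server localhost:8443 was refused" messages above, together with the "Stopped" APIServer status, indicate that the v1.20.0 control plane on old-k8s-version-131737 never came back up after the stop, so the user-app check could not succeed. A minimal triage sketch for this situation (the profile name comes from the report; the exact commands are illustrative and are not part of the test suite):

# Check how minikube sees the control plane for this profile
out/minikube-linux-amd64 status -p old-k8s-version-131737

# Look at the kubelet crash loop and container state inside the VM
# (journalctl and crictl are assumed to be available on the node image)
out/minikube-linux-amd64 ssh -p old-k8s-version-131737 -- sudo journalctl -u kubelet --no-pager | tail -n 50
out/minikube-linux-amd64 ssh -p old-k8s-version-131737 -- sudo crictl ps -a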

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:43:37.351576888 +0000 UTC m=+6736.402138877
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-640552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.687µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-640552 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
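The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the custom image registry.k8s.io/echoserver:1.4 that was passed via the --images=MetricsScraper flag when the dashboard addon was enabled (see the Audit table below), but the describe call itself already failed because the surrounding context deadline had expired. An equivalent manual check, sketched here for illustration (the jsonpath form is an illustrative shorthand, not the call the test makes):

kubectl --context default-k8s-diff-port-640552 -n kubernetes-dashboard \
  get deploy dashboard-metrics-scraper \
  -o jsonpath='{.spec.template.spec.containers[*].image}'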
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-640552 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-640552 logs -n 25: (1.124957397s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC | 28 Aug 24 18:41 UTC |
	| start   | -p newest-cni-835349 --memory=2200 --alsologtostderr   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC | 28 Aug 24 18:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341028 | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | disable-driver-mounts-341028                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-835349             | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-835349                  | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-835349 --memory=2200 --alsologtostderr   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	| image   | newest-cni-835349 image list                           | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	| delete  | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:43 UTC | 28 Aug 24 18:43 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:42:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:42:40.288520   84345 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:42:40.288629   84345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:42:40.288640   84345 out.go:358] Setting ErrFile to fd 2...
	I0828 18:42:40.288647   84345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:42:40.288859   84345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:42:40.289411   84345 out.go:352] Setting JSON to false
	I0828 18:42:40.290417   84345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8706,"bootTime":1724861854,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:42:40.290486   84345 start.go:139] virtualization: kvm guest
	I0828 18:42:40.292544   84345 out.go:177] * [newest-cni-835349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:42:40.293877   84345 notify.go:220] Checking for updates...
	I0828 18:42:40.293899   84345 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:42:40.295165   84345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:42:40.296239   84345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:42:40.297389   84345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:42:40.298464   84345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:42:40.299455   84345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:42:40.300829   84345 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:42:40.301223   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.301281   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.316802   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0828 18:42:40.317197   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.317713   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.317736   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.318103   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.318325   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.318579   84345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:42:40.318851   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.318883   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.333400   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0828 18:42:40.333834   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.334362   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.334395   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.334765   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.334954   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.371608   84345 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:42:40.372752   84345 start.go:297] selected driver: kvm2
	I0828 18:42:40.372773   84345 start.go:901] validating driver "kvm2" against &{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:42:40.372899   84345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:42:40.373590   84345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:42:40.373655   84345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:42:40.388558   84345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:42:40.388950   84345 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 18:42:40.389020   84345 cni.go:84] Creating CNI manager for ""
	I0828 18:42:40.389036   84345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:42:40.389084   84345 start.go:340] cluster config:
	{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:42:40.389209   84345 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:42:40.391037   84345 out.go:177] * Starting "newest-cni-835349" primary control-plane node in "newest-cni-835349" cluster
	I0828 18:42:40.392145   84345 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:42:40.392175   84345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:42:40.392181   84345 cache.go:56] Caching tarball of preloaded images
	I0828 18:42:40.392272   84345 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:42:40.392285   84345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 18:42:40.392387   84345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:42:40.392558   84345 start.go:360] acquireMachinesLock for newest-cni-835349: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:42:40.392601   84345 start.go:364] duration metric: took 24.588µs to acquireMachinesLock for "newest-cni-835349"
	I0828 18:42:40.392616   84345 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:42:40.392631   84345 fix.go:54] fixHost starting: 
	I0828 18:42:40.392898   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.392939   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.407254   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0828 18:42:40.407666   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.408077   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.408095   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.408429   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.408605   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.408778   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:42:40.410257   84345 fix.go:112] recreateIfNeeded on newest-cni-835349: state=Stopped err=<nil>
	I0828 18:42:40.410297   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	W0828 18:42:40.410465   84345 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:42:40.412487   84345 out.go:177] * Restarting existing kvm2 VM for "newest-cni-835349" ...
	I0828 18:42:40.413859   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Start
	I0828 18:42:40.414029   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring networks are active...
	I0828 18:42:40.414975   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring network default is active
	I0828 18:42:40.415308   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring network mk-newest-cni-835349 is active
	I0828 18:42:40.415756   84345 main.go:141] libmachine: (newest-cni-835349) Getting domain xml...
	I0828 18:42:40.416466   84345 main.go:141] libmachine: (newest-cni-835349) Creating domain...
	I0828 18:42:41.643666   84345 main.go:141] libmachine: (newest-cni-835349) Waiting to get IP...
	I0828 18:42:41.644576   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:41.645049   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:41.645096   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:41.645013   84380 retry.go:31] will retry after 261.688627ms: waiting for machine to come up
	I0828 18:42:41.908525   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:41.909063   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:41.909096   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:41.909010   84380 retry.go:31] will retry after 273.446367ms: waiting for machine to come up
	I0828 18:42:42.184438   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.184942   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.184964   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.184890   84380 retry.go:31] will retry after 385.016034ms: waiting for machine to come up
	I0828 18:42:42.571427   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.571875   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.571907   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.571821   84380 retry.go:31] will retry after 409.149804ms: waiting for machine to come up
	I0828 18:42:42.982309   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.982802   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.982823   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.982771   84380 retry.go:31] will retry after 743.553719ms: waiting for machine to come up
	I0828 18:42:43.727664   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:43.728153   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:43.728178   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:43.728113   84380 retry.go:31] will retry after 587.31043ms: waiting for machine to come up
	I0828 18:42:44.316697   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:44.317200   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:44.317227   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:44.317141   84380 retry.go:31] will retry after 934.216078ms: waiting for machine to come up
	I0828 18:42:45.253352   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:45.253911   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:45.253936   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:45.253865   84380 retry.go:31] will retry after 1.088835525s: waiting for machine to come up
	I0828 18:42:46.344716   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:46.345216   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:46.345246   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:46.345168   84380 retry.go:31] will retry after 1.716287117s: waiting for machine to come up
	I0828 18:42:48.063044   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:48.063482   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:48.063511   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:48.063439   84380 retry.go:31] will retry after 1.549324706s: waiting for machine to come up
	I0828 18:42:49.615165   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:49.615635   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:49.615664   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:49.615575   84380 retry.go:31] will retry after 2.003187438s: waiting for machine to come up
	I0828 18:42:51.620638   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:51.621074   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:51.621100   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:51.621025   84380 retry.go:31] will retry after 3.445816523s: waiting for machine to come up
	I0828 18:42:55.068243   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:55.068716   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:55.068748   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:55.068673   84380 retry.go:31] will retry after 3.263238671s: waiting for machine to come up
	I0828 18:42:58.335793   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.336295   84345 main.go:141] libmachine: (newest-cni-835349) Found IP for machine: 192.168.50.179
	I0828 18:42:58.336349   84345 main.go:141] libmachine: (newest-cni-835349) Reserving static IP address...
	I0828 18:42:58.336365   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has current primary IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.336868   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "newest-cni-835349", mac: "52:54:00:53:3a:ba", ip: "192.168.50.179"} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.336911   84345 main.go:141] libmachine: (newest-cni-835349) Reserved static IP address: 192.168.50.179
	I0828 18:42:58.336932   84345 main.go:141] libmachine: (newest-cni-835349) DBG | skip adding static IP to network mk-newest-cni-835349 - found existing host DHCP lease matching {name: "newest-cni-835349", mac: "52:54:00:53:3a:ba", ip: "192.168.50.179"}
	I0828 18:42:58.336958   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Getting to WaitForSSH function...
	I0828 18:42:58.336979   84345 main.go:141] libmachine: (newest-cni-835349) Waiting for SSH to be available...
	I0828 18:42:58.339449   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.339876   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.339906   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.340067   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH client type: external
	I0828 18:42:58.340091   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa (-rw-------)
	I0828 18:42:58.340140   84345 main.go:141] libmachine: (newest-cni-835349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:42:58.340156   84345 main.go:141] libmachine: (newest-cni-835349) DBG | About to run SSH command:
	I0828 18:42:58.340174   84345 main.go:141] libmachine: (newest-cni-835349) DBG | exit 0
	I0828 18:42:58.462048   84345 main.go:141] libmachine: (newest-cni-835349) DBG | SSH cmd err, output: <nil>: 
	I0828 18:42:58.462372   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetConfigRaw
	I0828 18:42:58.462985   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:58.465100   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.465464   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.465498   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.465703   84345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:42:58.465890   84345 machine.go:93] provisionDockerMachine start ...
	I0828 18:42:58.465911   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:58.466145   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.468355   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.468750   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.468795   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.468847   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.469021   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.469178   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.469297   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.469486   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.469663   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.469672   84345 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:42:58.566455   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:42:58.566488   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.566777   84345 buildroot.go:166] provisioning hostname "newest-cni-835349"
	I0828 18:42:58.566806   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.566991   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.569678   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.570031   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.570061   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.570214   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.570404   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.570561   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.570697   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.570955   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.571156   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.571173   84345 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-835349 && echo "newest-cni-835349" | sudo tee /etc/hostname
	I0828 18:42:58.679405   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-835349
	
	I0828 18:42:58.679441   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.682125   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.682477   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.682502   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.682668   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.682838   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.682999   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.683108   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.683303   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.683457   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.683473   84345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-835349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-835349/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-835349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:42:58.790240   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:42:58.790270   84345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:42:58.790292   84345 buildroot.go:174] setting up certificates
	I0828 18:42:58.790308   84345 provision.go:84] configureAuth start
	I0828 18:42:58.790320   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.790653   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:58.793453   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.793847   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.793877   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.794044   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.796517   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.796900   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.796932   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.797049   84345 provision.go:143] copyHostCerts
	I0828 18:42:58.797110   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:42:58.797132   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:42:58.797212   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:42:58.797383   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:42:58.797394   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:42:58.797439   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:42:58.797550   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:42:58.797561   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:42:58.797600   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:42:58.797684   84345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.newest-cni-835349 san=[127.0.0.1 192.168.50.179 localhost minikube newest-cni-835349]
	I0828 18:42:58.887168   84345 provision.go:177] copyRemoteCerts
	I0828 18:42:58.887220   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:42:58.887246   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.889749   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.890048   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.890105   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.890261   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.890434   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.890590   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.890768   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:58.967818   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:42:58.990102   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:42:59.013763   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:42:59.036731   84345 provision.go:87] duration metric: took 246.412579ms to configureAuth
	I0828 18:42:59.036757   84345 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:42:59.036968   84345 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:42:59.037100   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.039916   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.040274   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.040314   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.040484   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.040730   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.040901   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.041031   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.041190   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:59.041409   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:59.041432   84345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:42:59.253819   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:42:59.253847   84345 machine.go:96] duration metric: took 787.945536ms to provisionDockerMachine
	I0828 18:42:59.253859   84345 start.go:293] postStartSetup for "newest-cni-835349" (driver="kvm2")
	I0828 18:42:59.253898   84345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:42:59.253917   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.254256   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:42:59.254283   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.256843   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.257105   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.257144   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.257306   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.257533   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.257707   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.257825   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.336860   84345 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:42:59.341675   84345 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:42:59.341704   84345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:42:59.341768   84345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:42:59.341877   84345 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:42:59.341992   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:42:59.351114   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:42:59.373583   84345 start.go:296] duration metric: took 119.70869ms for postStartSetup
	I0828 18:42:59.373638   84345 fix.go:56] duration metric: took 18.981012092s for fixHost
	I0828 18:42:59.373664   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.376250   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.376600   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.376636   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.376806   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.377019   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.377185   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.377356   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.377550   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:59.377739   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:59.377750   84345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:42:59.474399   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724870579.434596442
	
	I0828 18:42:59.474420   84345 fix.go:216] guest clock: 1724870579.434596442
	I0828 18:42:59.474428   84345 fix.go:229] Guest: 2024-08-28 18:42:59.434596442 +0000 UTC Remote: 2024-08-28 18:42:59.373643401 +0000 UTC m=+19.120583395 (delta=60.953041ms)
	I0828 18:42:59.474447   84345 fix.go:200] guest clock delta is within tolerance: 60.953041ms
	I0828 18:42:59.474461   84345 start.go:83] releasing machines lock for "newest-cni-835349", held for 19.081852477s
	I0828 18:42:59.474479   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.474739   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:59.477422   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.477745   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.477776   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.477867   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478338   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478518   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478610   84345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:42:59.478663   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.478723   84345 ssh_runner.go:195] Run: cat /version.json
	I0828 18:42:59.478748   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.481237   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481584   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.481608   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481627   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481768   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.481954   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.482066   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.482093   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.482106   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.482287   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.482292   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.482473   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.482639   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.482805   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.594061   84345 ssh_runner.go:195] Run: systemctl --version
	I0828 18:42:59.600110   84345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:42:59.740000   84345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:42:59.745780   84345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:42:59.745843   84345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:42:59.761529   84345 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:42:59.761551   84345 start.go:495] detecting cgroup driver to use...
	I0828 18:42:59.761617   84345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:42:59.777658   84345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:42:59.791169   84345 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:42:59.791218   84345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:42:59.804618   84345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:42:59.817494   84345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:42:59.932207   84345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:43:00.092809   84345 docker.go:233] disabling docker service ...
	I0828 18:43:00.092914   84345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:43:00.106715   84345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:43:00.119540   84345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:43:00.226683   84345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:43:00.345919   84345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:43:00.359139   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:43:00.375915   84345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:43:00.375972   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.385221   84345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:43:00.385285   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.394715   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.404210   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.414289   84345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:43:00.424754   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.435023   84345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.451963   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.462132   84345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:43:00.471706   84345 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:43:00.471765   84345 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:43:00.485233   84345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:43:00.494526   84345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:00.605408   84345 ssh_runner.go:195] Run: sudo systemctl restart crio
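The sed edits above amount to a small CRI-O drop-in. A sketch of what /etc/crio/crio.conf.d/02-crio.conf should contain after this step, inferred from the commands in this log rather than captured from the VM (surrounding defaults depend on the CRI-O 1.29 build in the ISO):

  # inspect the values the provisioner just set (run inside the VM)
  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.10"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",   (as an entry inside default_sysctls = [ ... ])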
	I0828 18:43:00.695412   84345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:43:00.695487   84345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:43:00.699894   84345 start.go:563] Will wait 60s for crictl version
	I0828 18:43:00.699948   84345 ssh_runner.go:195] Run: which crictl
	I0828 18:43:00.703281   84345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:43:00.739959   84345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:43:00.740060   84345 ssh_runner.go:195] Run: crio --version
	I0828 18:43:00.766683   84345 ssh_runner.go:195] Run: crio --version
	I0828 18:43:00.796928   84345 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:43:00.798223   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:43:00.800754   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:00.801014   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:00.801045   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:00.801251   84345 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:43:00.805066   84345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:43:00.818774   84345 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0828 18:43:00.819917   84345 kubeadm.go:883] updating cluster {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:43:00.820036   84345 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:43:00.820100   84345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:00.856136   84345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:43:00.856206   84345 ssh_runner.go:195] Run: which lz4
	I0828 18:43:00.859828   84345 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:43:00.863554   84345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:43:00.863579   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:43:02.079244   84345 crio.go:462] duration metric: took 1.219446174s to copy over tarball
	I0828 18:43:02.079330   84345 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:43:04.178656   84345 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.099279079s)
	I0828 18:43:04.178693   84345 crio.go:469] duration metric: took 2.099414937s to extract the tarball
	I0828 18:43:04.178703   84345 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:43:04.217298   84345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:04.266008   84345 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:43:04.266031   84345 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:43:04.266039   84345 kubeadm.go:934] updating node { 192.168.50.179 8443 v1.31.0 crio true true} ...
	I0828 18:43:04.266194   84345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-835349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:43:04.266288   84345 ssh_runner.go:195] Run: crio config
	I0828 18:43:04.314721   84345 cni.go:84] Creating CNI manager for ""
	I0828 18:43:04.314740   84345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:43:04.314749   84345 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0828 18:43:04.314772   84345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-835349 NodeName:newest-cni-835349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:43:04.314961   84345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-835349"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:43:04.315039   84345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:43:04.326490   84345 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:43:04.326558   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:43:04.336465   84345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0828 18:43:04.353990   84345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:43:04.370747   84345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
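At this point the generated kubeadm config (the YAML dumped above) has been written to /var/tmp/minikube/kubeadm.yaml.new on the guest. One way to sanity-check it by hand, assuming the kubeadm binary staged under /var/lib/minikube/binaries supports the `config validate` subcommand (present in recent releases):

  # validate the generated config against the kubeadm API types (run inside the VM)
  sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new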
	I0828 18:43:04.387487   84345 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I0828 18:43:04.391098   84345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:43:04.402869   84345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:04.524434   84345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:43:04.549616   84345 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349 for IP: 192.168.50.179
	I0828 18:43:04.549640   84345 certs.go:194] generating shared ca certs ...
	I0828 18:43:04.549662   84345 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:04.549830   84345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:43:04.549885   84345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:43:04.549899   84345 certs.go:256] generating profile certs ...
	I0828 18:43:04.549996   84345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.key
	I0828 18:43:04.550088   84345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c
	I0828 18:43:04.550147   84345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key
	I0828 18:43:04.550287   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:43:04.550318   84345 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:43:04.550328   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:43:04.550363   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:43:04.550405   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:43:04.550451   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:43:04.550556   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:43:04.551378   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:43:04.607537   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:43:04.640395   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:43:04.676623   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:43:04.713030   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:43:04.736395   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:43:04.760024   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:43:04.785509   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:43:04.809605   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:43:04.832771   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:43:04.855465   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:43:04.879459   84345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:43:04.895101   84345 ssh_runner.go:195] Run: openssl version
	I0828 18:43:04.900559   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:43:04.910869   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.914964   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.915019   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.920727   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:43:04.932458   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:43:04.943845   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.948500   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.948563   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.954037   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:43:04.964951   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:43:04.974931   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.979256   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.979315   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.984797   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
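The 3ec20f2e.0, b5213941.0 and 51391683.0 targets created above are OpenSSL subject-hash links: the hash comes from `openssl x509 -hash -noout` on each certificate, and the <hash>.0 symlink is what lets the system trust store resolve the CA by hash. A hypothetical check of the minikubeCA mapping, using the same paths as in this log:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  readlink /etc/ssl/certs/b5213941.0                                        # -> /etc/ssl/certs/minikubeCA.pem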
	I0828 18:43:04.994940   84345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:43:04.999323   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:43:05.005085   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:43:05.010924   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:43:05.016560   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:43:05.022100   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:43:05.027387   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0828 18:43:05.032655   84345 kubeadm.go:392] StartCluster: {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:43:05.032738   84345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:43:05.032775   84345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:43:05.069173   84345 cri.go:89] found id: ""
	I0828 18:43:05.069252   84345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:43:05.079874   84345 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:43:05.079904   84345 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:43:05.079956   84345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:43:05.089635   84345 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:43:05.090509   84345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-835349" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:43:05.091095   84345 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-835349" cluster setting kubeconfig missing "newest-cni-835349" context setting]
	I0828 18:43:05.091990   84345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:05.093668   84345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:43:05.104043   84345 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.179
	I0828 18:43:05.104075   84345 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:43:05.104086   84345 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:43:05.104129   84345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:43:05.149020   84345 cri.go:89] found id: ""
	I0828 18:43:05.149096   84345 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:43:05.165415   84345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:43:05.174673   84345 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:43:05.174694   84345 kubeadm.go:157] found existing configuration files:
	
	I0828 18:43:05.174738   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:43:05.183716   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:43:05.183787   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:43:05.192521   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:43:05.200837   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:43:05.200899   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:43:05.211883   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:43:05.221422   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:43:05.221481   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:43:05.231366   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:43:05.239908   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:43:05.239980   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:43:05.248516   84345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:43:05.257366   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:05.365457   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.202132   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.392657   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.469239   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.560092   84345 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:43:06.560237   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:07.061270   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:07.560526   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:08.060341   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:08.560458   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:09.060260   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:09.074008   84345 api_server.go:72] duration metric: took 2.513928542s to wait for apiserver process to appear ...
	I0828 18:43:09.074038   84345 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:43:09.074062   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:10.785004   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:43:10.785040   84345 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:43:10.785057   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:10.793607   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:43:10.793638   84345 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:43:11.075116   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:11.080246   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:43:11.080286   84345 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
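The 403 responses above come from the health probe hitting /healthz anonymously before the RBAC bootstrap roles (which grant unauthenticated access to /healthz) exist; once those start to land, the probe gets through and the 500s simply report that the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, which is expected this soon after the control-plane init phases. A rough manual equivalent of what api_server.go is polling (run inside the VM; -k because the request is unauthenticated, matching the system:anonymous errors above):

  curl -sk "https://192.168.50.179:8443/healthz?verbose"
  # individual post-start hooks can also be probed, e.g.:
  curl -sk "https://192.168.50.179:8443/healthz/poststarthook/rbac/bootstrap-roles"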
	I0828 18:43:11.574443   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:11.587056   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:43:11.587090   84345 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:43:12.074414   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:12.080189   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:43:12.080213   84345 api_server.go:103] status: https://192.168.50.179:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:43:12.574806   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:12.582941   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I0828 18:43:12.592254   84345 api_server.go:141] control plane version: v1.31.0
	I0828 18:43:12.592282   84345 api_server.go:131] duration metric: took 3.518236412s to wait for apiserver health ...
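	
	[Note] The api_server.go lines above show minikube probing https://192.168.50.179:8443/healthz roughly every 500ms and only declaring the apiserver healthy once the rbac/bootstrap-roles and scheduling post-start hooks flip to [+] and the endpoint returns 200. The following is a minimal Go sketch of that poll-until-healthy loop; the InsecureSkipVerify transport and the hard-coded URL are illustrative assumptions (minikube itself authenticates with the cluster's client certificates), not the exact code path exercised by this test.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires,
	// mirroring the retry loop visible in the api_server.go log lines above.
	func waitForHealthz(url string, timeout time.Duration) error {
		// NOTE: InsecureSkipVerify is an assumption for this sketch; minikube
		// uses the cluster's client certificates instead of skipping verification.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // "healthz returned 200: ok"
				}
				// A 500 carries the per-hook [+]/[-] breakdown seen in the log above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}
	
	func main() {
		if err := waitForHealthz("https://192.168.50.179:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	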
	I0828 18:43:12.592291   84345 cni.go:84] Creating CNI manager for ""
	I0828 18:43:12.592297   84345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:43:12.594408   84345 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:43:12.595742   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:43:12.616364   84345 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:43:12.658203   84345 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:43:12.673492   84345 system_pods.go:59] 8 kube-system pods found
	I0828 18:43:12.673537   84345 system_pods.go:61] "coredns-6f6b679f8f-h8vs8" [41d7190f-3e79-4fbd-8329-e9ade42cfe65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:43:12.673550   84345 system_pods.go:61] "etcd-newest-cni-835349" [64f4ca83-67b0-4fbe-965a-bc8cb63cf7a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:43:12.673562   84345 system_pods.go:61] "kube-apiserver-newest-cni-835349" [c34fc0af-fc3d-4000-ad16-c7c059ccf937] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:43:12.673574   84345 system_pods.go:61] "kube-controller-manager-newest-cni-835349" [6a2b2e65-2fd0-4c8e-80b8-cfaa6f9ffb85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:43:12.673584   84345 system_pods.go:61] "kube-proxy-g455f" [4718f57c-80e0-4fd2-9638-086c4a93cd0f] Running
	I0828 18:43:12.673596   84345 system_pods.go:61] "kube-scheduler-newest-cni-835349" [a3dcfa85-587d-4838-86b2-0260d02a4651] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:43:12.673608   84345 system_pods.go:61] "metrics-server-6867b74b74-kpcm7" [54cc0fe7-704a-4064-8e9c-f246bc3a2ec0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:43:12.673618   84345 system_pods.go:61] "storage-provisioner" [4886d70b-d84c-4b8a-b2eb-e38ba9957e51] Running
	I0828 18:43:12.673636   84345 system_pods.go:74] duration metric: took 15.409146ms to wait for pod list to return data ...
	I0828 18:43:12.673650   84345 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:43:12.677819   84345 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:43:12.677849   84345 node_conditions.go:123] node cpu capacity is 2
	I0828 18:43:12.677865   84345 node_conditions.go:105] duration metric: took 4.203177ms to run NodePressure ...
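	
	[Note] The "waiting for kube-system pods to appear" and "node cpu capacity is 2" / "node storage ephemeral capacity is 17734596Ki" lines above correspond to minikube listing kube-system pods and reading node capacity through the Kubernetes API. A rough client-go equivalent is sketched below under the assumption that the kubeconfig path shown earlier in the log is used; minikube performs these checks with its own internal helpers rather than this exact code.
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumes the kubeconfig written by this test run; adjust the path as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19529-10317/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// "waiting for kube-system pods to appear ..."
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	
		// "verifying NodePressure condition ..." reads node capacity like this.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
	}
	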
	I0828 18:43:12.677886   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:12.940931   84345 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:43:12.954148   84345 ops.go:34] apiserver oom_adj: -16
	I0828 18:43:12.954171   84345 kubeadm.go:597] duration metric: took 7.87425981s to restartPrimaryControlPlane
	I0828 18:43:12.954183   84345 kubeadm.go:394] duration metric: took 7.921542573s to StartCluster
	I0828 18:43:12.954203   84345 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:12.954290   84345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:43:12.955427   84345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:12.955715   84345 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:43:12.955780   84345 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:43:12.955890   84345 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-835349"
	I0828 18:43:12.955923   84345 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-835349"
	W0828 18:43:12.955931   84345 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:43:12.955965   84345 host.go:66] Checking if "newest-cni-835349" exists ...
	I0828 18:43:12.955981   84345 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:43:12.956033   84345 addons.go:69] Setting default-storageclass=true in profile "newest-cni-835349"
	I0828 18:43:12.956067   84345 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-835349"
	I0828 18:43:12.956360   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.956412   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.956436   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.956473   84345 addons.go:69] Setting dashboard=true in profile "newest-cni-835349"
	I0828 18:43:12.956513   84345 addons.go:234] Setting addon dashboard=true in "newest-cni-835349"
	I0828 18:43:12.956505   84345 addons.go:69] Setting metrics-server=true in profile "newest-cni-835349"
	W0828 18:43:12.956526   84345 addons.go:243] addon dashboard should already be in state true
	I0828 18:43:12.956545   84345 addons.go:234] Setting addon metrics-server=true in "newest-cni-835349"
	W0828 18:43:12.956559   84345 addons.go:243] addon metrics-server should already be in state true
	I0828 18:43:12.956561   84345 host.go:66] Checking if "newest-cni-835349" exists ...
	I0828 18:43:12.956587   84345 host.go:66] Checking if "newest-cni-835349" exists ...
	I0828 18:43:12.956658   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.956922   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.956965   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.957008   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.957021   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.957456   84345 out.go:177] * Verifying Kubernetes components...
	I0828 18:43:12.958947   84345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:12.972334   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41343
	I0828 18:43:12.972551   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I0828 18:43:12.972739   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.972960   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.973211   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.973233   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.973416   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.973435   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.973692   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.973984   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.974067   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:43:12.974587   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.974655   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.975621   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46585
	I0828 18:43:12.976038   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.976403   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41091
	I0828 18:43:12.976655   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.976684   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.976727   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.977074   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.977220   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.977244   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.977490   84345 addons.go:234] Setting addon default-storageclass=true in "newest-cni-835349"
	W0828 18:43:12.977502   84345 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:43:12.977526   84345 host.go:66] Checking if "newest-cni-835349" exists ...
	I0828 18:43:12.977537   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.977639   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.977687   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.977860   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.977888   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.978040   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.978090   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.993074   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0828 18:43:12.993077   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I0828 18:43:12.993532   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.993619   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.993992   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.994007   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.994161   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.994185   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.994396   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.994572   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.994745   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:43:12.994941   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:43:12.994979   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:43:12.996567   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:43:12.996579   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0828 18:43:12.996963   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:12.997468   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:12.997490   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:12.997828   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:12.998029   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:43:12.998598   84345 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0828 18:43:12.999746   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:43:13.001060   84345 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0828 18:43:13.001083   84345 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:43:13.002358   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0828 18:43:13.002388   84345 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0828 18:43:13.002424   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:43:13.002462   84345 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:43:13.002479   84345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:43:13.002493   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:43:13.003031   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36813
	I0828 18:43:13.003545   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:13.004086   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:13.004128   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:13.004497   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:13.004681   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:43:13.006683   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.006723   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.006728   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:43:13.006750   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:13.006775   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.006920   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:43:13.007268   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:13.007294   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.007324   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:43:13.007556   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:43:13.007622   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:43:13.007696   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:43:13.007748   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:43:13.008052   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:43:13.008270   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:43:13.008443   84345 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:43:13.009794   84345 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:43:13.009809   84345 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:43:13.009821   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:43:13.012617   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.013052   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:13.013067   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.013278   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:43:13.013453   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:43:13.013630   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:43:13.013758   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:43:13.018610   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I0828 18:43:13.018915   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:43:13.019301   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:43:13.019313   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:43:13.019557   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:43:13.019757   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:43:13.021204   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:43:13.021404   84345 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:43:13.021417   84345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:43:13.021433   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:43:13.024166   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.024562   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:13.024589   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:13.024723   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:43:13.024885   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:43:13.024992   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:43:13.025124   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:43:13.173243   84345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:43:13.194785   84345 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:43:13.194878   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:13.208365   84345 api_server.go:72] duration metric: took 252.608527ms to wait for apiserver process to appear ...
	I0828 18:43:13.208396   84345 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:43:13.208414   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
	I0828 18:43:13.212998   84345 api_server.go:279] https://192.168.50.179:8443/healthz returned 200:
	ok
	I0828 18:43:13.213910   84345 api_server.go:141] control plane version: v1.31.0
	I0828 18:43:13.213929   84345 api_server.go:131] duration metric: took 5.52672ms to wait for apiserver health ...
	I0828 18:43:13.213936   84345 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:43:13.219867   84345 system_pods.go:59] 8 kube-system pods found
	I0828 18:43:13.219907   84345 system_pods.go:61] "coredns-6f6b679f8f-h8vs8" [41d7190f-3e79-4fbd-8329-e9ade42cfe65] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:43:13.219918   84345 system_pods.go:61] "etcd-newest-cni-835349" [64f4ca83-67b0-4fbe-965a-bc8cb63cf7a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:43:13.219931   84345 system_pods.go:61] "kube-apiserver-newest-cni-835349" [c34fc0af-fc3d-4000-ad16-c7c059ccf937] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:43:13.219941   84345 system_pods.go:61] "kube-controller-manager-newest-cni-835349" [6a2b2e65-2fd0-4c8e-80b8-cfaa6f9ffb85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:43:13.219947   84345 system_pods.go:61] "kube-proxy-g455f" [4718f57c-80e0-4fd2-9638-086c4a93cd0f] Running
	I0828 18:43:13.219955   84345 system_pods.go:61] "kube-scheduler-newest-cni-835349" [a3dcfa85-587d-4838-86b2-0260d02a4651] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:43:13.219967   84345 system_pods.go:61] "metrics-server-6867b74b74-kpcm7" [54cc0fe7-704a-4064-8e9c-f246bc3a2ec0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:43:13.219976   84345 system_pods.go:61] "storage-provisioner" [4886d70b-d84c-4b8a-b2eb-e38ba9957e51] Running
	I0828 18:43:13.219984   84345 system_pods.go:74] duration metric: took 6.042201ms to wait for pod list to return data ...
	I0828 18:43:13.219996   84345 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:43:13.222347   84345 default_sa.go:45] found service account: "default"
	I0828 18:43:13.222368   84345 default_sa.go:55] duration metric: took 2.362685ms for default service account to be created ...
	I0828 18:43:13.222382   84345 kubeadm.go:582] duration metric: took 266.630136ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 18:43:13.222404   84345 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:43:13.224244   84345 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:43:13.224264   84345 node_conditions.go:123] node cpu capacity is 2
	I0828 18:43:13.224274   84345 node_conditions.go:105] duration metric: took 1.864488ms to run NodePressure ...
	I0828 18:43:13.224286   84345 start.go:241] waiting for startup goroutines ...
	I0828 18:43:13.311934   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0828 18:43:13.311957   84345 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0828 18:43:13.313714   84345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:43:13.353516   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0828 18:43:13.353538   84345 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0828 18:43:13.354543   84345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:43:13.354543   84345 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:43:13.354625   84345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:43:13.385440   84345 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:43:13.385469   84345 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:43:13.402739   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0828 18:43:13.402763   84345 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0828 18:43:13.424269   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0828 18:43:13.424289   84345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0828 18:43:13.448791   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0828 18:43:13.448812   84345 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0828 18:43:13.471859   84345 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:43:13.471888   84345 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:43:13.499559   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0828 18:43:13.499591   84345 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0828 18:43:13.526884   84345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:43:13.548388   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0828 18:43:13.548415   84345 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0828 18:43:13.639018   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0828 18:43:13.639047   84345 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0828 18:43:13.683897   84345 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:43:13.683927   84345 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0828 18:43:13.702700   84345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0828 18:43:14.752352   84345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.39776884s)
	I0828 18:43:14.752412   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.752426   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.752621   84345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.438876993s)
	I0828 18:43:14.752680   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.752697   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.752789   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:14.752808   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.752817   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.752825   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.752832   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.752897   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.752907   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.752916   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.752927   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.753239   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.753255   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.753265   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.753278   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.753414   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:14.764048   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.764073   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.764340   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.764392   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.764412   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:14.829410   84345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.302453655s)
	I0828 18:43:14.829463   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.829473   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.829790   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:14.829838   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.829847   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.829858   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:14.829868   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:14.830104   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:14.830120   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:14.830136   84345 addons.go:475] Verifying addon metrics-server=true in "newest-cni-835349"
	I0828 18:43:15.164184   84345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.461428799s)
	I0828 18:43:15.164235   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:15.164251   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:15.164561   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:15.164602   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:15.164610   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:15.164623   84345 main.go:141] libmachine: Making call to close driver server
	I0828 18:43:15.164630   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Close
	I0828 18:43:15.164880   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Closing plugin on server side
	I0828 18:43:15.164916   84345 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:43:15.164929   84345 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:43:15.166722   84345 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-835349 addons enable metrics-server
	
	I0828 18:43:15.168150   84345 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0828 18:43:15.169548   84345 addons.go:510] duration metric: took 2.213768613s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0828 18:43:15.169592   84345 start.go:246] waiting for cluster config update ...
	I0828 18:43:15.169606   84345 start.go:255] writing updated cluster config ...
	I0828 18:43:15.169916   84345 ssh_runner.go:195] Run: rm -f paused
	I0828 18:43:15.218200   84345 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:43:15.219975   84345 out.go:177] * Done! kubectl is now configured to use "newest-cni-835349" cluster and "default" namespace by default
	
	
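	
	[Note] The CRI-O excerpt below is the runtime's debug log of CRI calls such as /runtime.v1.RuntimeService/ListContainers and /runtime.v1.ImageService/ImageFsInfo issued by the kubelet. The same ListContainers RPC can be reproduced against the node's CRI socket, as in the Go sketch below; the socket path and the empty filter are assumptions for illustration (CRI-O's conventional default is /var/run/crio/crio.sock).
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumed CRI-O socket path; adjust if the node is configured differently.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// Same RPC as the "/runtime.v1.RuntimeService/ListContainers" debug lines
		// below: an empty filter returns the full container list.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}
	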
	==> CRI-O <==
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.951223658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870617951191904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eda595cf-9906-4e97-b9cb-673695378023 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.951952724Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=84f1c537-213a-4090-be6c-4acbcf8119cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.952046093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=84f1c537-213a-4090-be6c-4acbcf8119cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.952396439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=84f1c537-213a-4090-be6c-4acbcf8119cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.993174004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8509647-5610-4d75-9aca-b88477a775d9 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.993263124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8509647-5610-4d75-9aca-b88477a775d9 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.995132174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1dfdeda-c246-44c0-8f1e-deadb5837e9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.995618939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870617995594645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1dfdeda-c246-44c0-8f1e-deadb5837e9d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.996236486Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6140454-8f92-4a91-b393-05f7ec60beaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.996335162Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6140454-8f92-4a91-b393-05f7ec60beaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:37 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:37.996571865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e6140454-8f92-4a91-b393-05f7ec60beaf name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.040636124Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b6a2154a-353f-4b7d-ad38-a763b98754fe name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.040711784Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b6a2154a-353f-4b7d-ad38-a763b98754fe name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.041794537Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0b5c4d9-489f-422f-a70c-80fb55367f40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.042195043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870618042173918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0b5c4d9-489f-422f-a70c-80fb55367f40 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.042759385Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc0cafaf-c0c9-45f0-b3c3-7708b9c7c27c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.042826815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc0cafaf-c0c9-45f0-b3c3-7708b9c7c27c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.043020190Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc0cafaf-c0c9-45f0-b3c3-7708b9c7c27c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.073944292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0034f121-3b8b-41c2-bd8d-4a7f4033a1bf name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.074012352Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0034f121-3b8b-41c2-bd8d-4a7f4033a1bf name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.074874138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12691e8a-6fc8-4adf-94cb-bcab97ba1ccd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.075275363Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870618075247563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12691e8a-6fc8-4adf-94cb-bcab97ba1ccd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.075764906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c341c41-0e55-4aa8-9e20-831412d07e08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.075815499Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c341c41-0e55-4aa8-9e20-831412d07e08 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:38 default-k8s-diff-port-640552 crio[702]: time="2024-08-28 18:43:38.076008313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869355659557944,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e194ddce09e9049094d2848c807efa8388a9e0e07a1f1b2e4bd4bcb33e5f5ea,PodSandboxId:d007dc2e2c3a31bb7df7222f791e9126fa7ee1311a769dfdb4d08503b02e7b0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869335696268301,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f826550a-fcfa-4f39-9c73-44834e6e4721,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db,PodSandboxId:ae6e975f4de6504de0ce883436df054f3c65a21194f003f6049cbc88b36f6e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869332522555962,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-t5lx6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63a7dcfb-266b-4eb2-bdfb-e8153da41df1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80,PodSandboxId:6cac35685c17caf959903e61b5aab4beef7b9e37b7d24c0ab3b444a1674e1085,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869324800568946,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 26468a47-d594-4b6c-823b-aea49a222f68,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee,PodSandboxId:9d02549c1f5435d119bcd657d9af568c60692fb748c19ae93aae257bbfda3612,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869324803349140,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lmpft,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cddc57ae-4f38-4fd3-aa82
-5552ba727d88,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143,PodSandboxId:ad350345059c3492da2c02f8e20182d914adedbf18a1664949f3f48720490f98,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869321241027158,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be64182f68d88a91e8f5a225d2d1d695,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342,PodSandboxId:9e3e4c6602381bacf00a6cd2c0d9959ad0ee129416447c787a729a1fbd6673c8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869321228710657,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edf8dc78d48c852701ab852fe447b50,},Annotations:map[str
ing]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286,PodSandboxId:1489f217ae6f57711d39b42e40ad1ea0982809ba36be08c8eecb2ecc826c523d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869321222829800,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26fa28440663f54746801eb6a944d
ea8,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12,PodSandboxId:cc29489220c3640aede6abf03ade44624e43725cb746d7994be3c5d45eeb7111,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869321208847863,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-640552,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96b3be8f1c9d0b215d7fcb36c3f9a97
6,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c341c41-0e55-4aa8-9e20-831412d07e08 name=/runtime.v1.RuntimeService/ListContainers
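
	These CRI-O debug entries are just the kubelet's periodic polling of the runtime over the CRI socket: Version, ImageFsInfo, then an unfiltered ListContainers every few dozen milliseconds, each answered with the same nine containers. A minimal, hedged way to watch the same stream on the node itself (assumes CRI-O's standard systemd unit name and default socket path, reached via `minikube ssh -p default-k8s-diff-port-640552`):

	  # follow CRI-O's own log (the source of the lines above)
	  sudo journalctl -u crio -f
	  # confirm the runtime and version the kubelet is talking to over the CRI socket
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version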
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02d2a37fd69e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   6cac35685c17c       storage-provisioner
	9e194ddce09e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   d007dc2e2c3a3       busybox
	93284522e6de6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   ae6e975f4de65       coredns-6f6b679f8f-t5lx6
	729f7a235e3df       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Running             kube-proxy                1                   9d02549c1f543       kube-proxy-lmpft
	48533565061e0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   6cac35685c17c       storage-provisioner
	3895a4d3fb7d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   ad350345059c3       etcd-default-k8s-diff-port-640552
	d4b3a88fe2356       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      21 minutes ago      Running             kube-apiserver            1                   9e3e4c6602381       kube-apiserver-default-k8s-diff-port-640552
	1d1212a86ca9a       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      21 minutes ago      Running             kube-controller-manager   1                   1489f217ae6f5       kube-controller-manager-default-k8s-diff-port-640552
	101c4701cc860       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Running             kube-scheduler            1                   cc29489220c36       kube-scheduler-default-k8s-diff-port-640552
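
	The status table is the human-readable view of those same nine containers; the single Exited entry is the first storage-provisioner attempt, superseded by the running attempt 2. A hedged sketch of reproducing the listing by hand inside the guest (assumes crictl ships in the minikube VM alongside CRI-O, which is the default for this ISO):

	  minikube ssh -p default-k8s-diff-port-640552
	  # all containers, including exited attempts
	  sudo crictl ps -a
	  # the sandboxes that the POD ID column refers to
	  sudo crictl pods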
	
	
	==> coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47719 - 32955 "HINFO IN 72317959396472030.9198756957633981570. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.01057349s
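
	CoreDNS shows only its startup banner and the loop-detection HINFO query (answered NXDOMAIN, the normal result), so cluster DNS itself looks healthy. A quick hedged probe from inside the cluster, if needed (the pod name and busybox tag here are arbitrary choices, not taken from the report):

	  kubectl --context default-k8s-diff-port-640552 run dns-probe --rm -it --restart=Never \
	    --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local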
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-640552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-640552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=default-k8s-diff-port-640552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_14_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:14:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-640552
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:43:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:43:00 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:43:00 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:43:00 +0000   Wed, 28 Aug 2024 18:14:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:43:00 +0000   Wed, 28 Aug 2024 18:22:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.226
	  Hostname:    default-k8s-diff-port-640552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 53c2b777c7454982bb99ff6c37b0f2c6
	  System UUID:                53c2b777-c745-4982-bb99-ff6c37b0f2c6
	  Boot ID:                    4d8cbfc2-df06-4ef4-b068-829fcdbebf68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-6f6b679f8f-t5lx6                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-640552                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-640552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-640552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-lmpft                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-640552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-lccm2                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-640552 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-640552 event: Registered Node default-k8s-diff-port-640552 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-640552 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-640552 event: Registered Node default-k8s-diff-port-640552 in Controller
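
	The node has been Ready since 18:22, with roughly 850m CPU and 370Mi memory requested out of 2 CPUs / ~2Gi allocatable, and the event history records the kubelet restart 21 minutes ago. A hedged way to pull just the conditions and resource summary shown above (the context name follows the minikube profile, as elsewhere in this report):

	  kubectl --context default-k8s-diff-port-640552 get node default-k8s-diff-port-640552 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  kubectl --context default-k8s-diff-port-640552 describe node default-k8s-diff-port-640552 \
	    | grep -A 8 'Allocated resources'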
	
	
	==> dmesg <==
	[Aug28 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052887] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044537] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.839421] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.912289] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.536202] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.263565] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.063267] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.049615] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.193764] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.119487] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.270133] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[  +4.006864] systemd-fstab-generator[785]: Ignoring "noauto" option for root device
	[  +1.798911] systemd-fstab-generator[904]: Ignoring "noauto" option for root device
	[  +0.059563] kauditd_printk_skb: 158 callbacks suppressed
	[Aug28 18:22] kauditd_printk_skb: 69 callbacks suppressed
	[  +2.453029] systemd-fstab-generator[1538]: Ignoring "noauto" option for root device
	[  +3.255197] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.279446] kauditd_printk_skb: 44 callbacks suppressed
	
	
	==> etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] <==
	{"level":"info","ts":"2024-08-28T18:32:03.031903Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1483018848,"revision":822,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T18:37:03.028350Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1064}
	{"level":"info","ts":"2024-08-28T18:37:03.032388Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1064,"took":"3.750533ms","hash":1728675828,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1687552,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-28T18:37:03.032438Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1728675828,"revision":1064,"compact-revision":822}
	{"level":"info","ts":"2024-08-28T18:42:03.036709Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1307}
	{"level":"info","ts":"2024-08-28T18:42:03.040325Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1307,"took":"3.285808ms","hash":2001707713,"current-db-size-bytes":2715648,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1654784,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-28T18:42:03.040376Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2001707713,"revision":1307,"compact-revision":1064}
	{"level":"warn","ts":"2024-08-28T18:42:15.699440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.67851ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:42:15.699518Z","caller":"traceutil/trace.go:171","msg":"trace[1418180950] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1560; }","duration":"130.837854ms","start":"2024-08-28T18:42:15.568661Z","end":"2024-08-28T18:42:15.699499Z","steps":["trace[1418180950] 'range keys from in-memory index tree'  (duration: 130.652824ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:42:15.700631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.745751ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9883027998857465470 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.226\" mod_revision:1553 > success:<request_put:<key:\"/registry/masterleases/192.168.39.226\" value_size:67 lease:659655962002689660 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.226\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-28T18:42:15.700763Z","caller":"traceutil/trace.go:171","msg":"trace[671742700] transaction","detail":"{read_only:false; response_revision:1561; number_of_response:1; }","duration":"253.374759ms","start":"2024-08-28T18:42:15.447370Z","end":"2024-08-28T18:42:15.700745Z","steps":["trace[671742700] 'process raft request'  (duration: 127.602456ms)","trace[671742700] 'compare'  (duration: 124.642047ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:43:07.792998Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9883027998857465788,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-08-28T18:43:07.794833Z","caller":"traceutil/trace.go:171","msg":"trace[949474764] transaction","detail":"{read_only:false; response_revision:1604; number_of_response:1; }","duration":"702.403501ms","start":"2024-08-28T18:43:07.092416Z","end":"2024-08-28T18:43:07.794819Z","steps":["trace[949474764] 'process raft request'  (duration: 702.309764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:43:07.794957Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:43:07.092390Z","time spent":"702.500524ms","remote":"127.0.0.1:46906","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1602 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-28T18:43:07.795240Z","caller":"traceutil/trace.go:171","msg":"trace[893837689] linearizableReadLoop","detail":"{readStateIndex:1897; appliedIndex:1896; }","duration":"502.499229ms","start":"2024-08-28T18:43:07.292722Z","end":"2024-08-28T18:43:07.795222Z","steps":["trace[893837689] 'read index received'  (duration: 501.926351ms)","trace[893837689] 'applied index is now lower than readState.Index'  (duration: 569.065µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:43:07.795502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"377.614468ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:07.795547Z","caller":"traceutil/trace.go:171","msg":"trace[917158544] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1604; }","duration":"377.664181ms","start":"2024-08-28T18:43:07.417872Z","end":"2024-08-28T18:43:07.795536Z","steps":["trace[917158544] 'agreement among raft nodes before linearized reading'  (duration: 377.593728ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:43:07.795576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:43:07.417822Z","time spent":"377.747571ms","remote":"127.0.0.1:47188","response type":"/etcdserverpb.KV/Range","request count":0,"request size":90,"response count":0,"response size":28,"request content":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true "}
	{"level":"warn","ts":"2024-08-28T18:43:07.795640Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"502.797741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:07.795748Z","caller":"traceutil/trace.go:171","msg":"trace[329363384] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1604; }","duration":"503.020066ms","start":"2024-08-28T18:43:07.292719Z","end":"2024-08-28T18:43:07.795739Z","steps":["trace[329363384] 'agreement among raft nodes before linearized reading'  (duration: 502.733722ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:43:07.795880Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:43:07.292685Z","time spent":"503.185368ms","remote":"127.0.0.1:46694","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-28T18:43:07.795683Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.525287ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:07.796146Z","caller":"traceutil/trace.go:171","msg":"trace[187399127] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1604; }","duration":"233.983483ms","start":"2024-08-28T18:43:07.562153Z","end":"2024-08-28T18:43:07.796137Z","steps":["trace[187399127] 'agreement among raft nodes before linearized reading'  (duration: 233.517133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:43:08.050317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.577364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:08.050469Z","caller":"traceutil/trace.go:171","msg":"trace[166519006] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1604; }","duration":"100.771685ms","start":"2024-08-28T18:43:07.949671Z","end":"2024-08-28T18:43:08.050443Z","steps":["trace[166519006] 'count revisions from in-memory index tree'  (duration: 100.480812ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:43:38 up 22 min,  0 users,  load average: 0.38, 0.32, 0.17
	Linux default-k8s-diff-port-640552 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] <==
	I0828 18:40:05.236743       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:40:05.236808       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:42:04.235533       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:42:04.235684       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:42:05.238328       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:42:05.238385       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:42:05.238328       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:42:05.238467       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:42:05.239520       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:42:05.239542       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:43:05.240667       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:43:05.240781       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0828 18:43:05.240911       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:43:05.241010       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:43:05.241983       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:43:05.243172       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
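
	The apiserver's only recurring complaint is that the aggregated v1beta1.metrics.k8s.io APIService keeps answering 503, so its OpenAPI spec is never fetched; the controller-manager errors below are the discovery-side echo of the same unavailable service. A hedged way to see where it is stuck (the k8s-app=metrics-server label and deployment name are the conventional ones for the metrics-server addon, assumed here rather than read from this log):

	  kubectl --context default-k8s-diff-port-640552 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-640552 -n kube-system get pods -l k8s-app=metrics-server
	  kubectl --context default-k8s-diff-port-640552 -n kube-system logs deploy/metrics-server --tail=20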
	
	
	==> kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] <==
	I0828 18:38:28.451342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="329.052µs"
	E0828 18:38:37.963601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:38:38.419481       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:38:40.449662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="181.565µs"
	E0828 18:39:07.970247       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:08.428593       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:39:37.975776       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:38.436062       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:07.984956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:08.445055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:37.991588       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:38.453204       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:07.998644       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:08.460728       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:38.004601       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:38.470500       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:42:08.014038       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:42:08.479236       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:42:38.021048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:42:38.486520       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:43:00.043400       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-640552"
	E0828 18:43:08.027773       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:43:08.494990       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:43:32.458622       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="266.771µs"
	E0828 18:43:38.035540       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:22:04.977185       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:22:04.988102       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.226"]
	E0828 18:22:04.988231       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:22:05.016626       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:22:05.016668       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:22:05.016693       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:22:05.019331       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:22:05.019617       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:22:05.019629       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:05.020975       1 config.go:197] "Starting service config controller"
	I0828 18:22:05.021051       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:22:05.021098       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:22:05.021134       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:22:05.021934       1 config.go:326] "Starting node config controller"
	I0828 18:22:05.023571       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:22:05.122391       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:22:05.122514       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:22:05.125378       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] <==
	I0828 18:22:02.102035       1 serving.go:386] Generated self-signed cert in-memory
	W0828 18:22:04.214002       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:22:04.214137       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:22:04.214167       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:22:04.214220       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:22:04.243756       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 18:22:04.243877       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:04.246437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 18:22:04.246521       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:22:04.247885       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 18:22:04.248557       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 18:22:04.347268       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:42:39 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:39.818144     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870559817648472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:39 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:39.818188     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870559817648472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:43 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:43.435394     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:42:49 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:49.821800     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870569821006682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:49 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:49.822139     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870569821006682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:57 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:57.438476     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:59.454708     911 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:59.824660     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870579824026251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:59 default-k8s-diff-port-640552 kubelet[911]: E0828 18:42:59.824708     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870579824026251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:09 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:09.435867     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:43:09 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:09.826787     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870589825841706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:09 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:09.826968     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870589825841706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:19 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:19.828878     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870599828014000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:19 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:19.829235     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870599828014000,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:20 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:20.448336     911 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 28 18:43:20 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:20.448472     911 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 28 18:43:20 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:20.448840     911 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-whx5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-lccm2_kube-system(a8729f4d-7653-42f2-bcdc-0b95f4aa7080): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 28 18:43:20 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:20.450497     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	Aug 28 18:43:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:29.832027     911 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870609831458014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:29 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:29.832068     911 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870609831458014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:32 default-k8s-diff-port-640552 kubelet[911]: E0828 18:43:32.435764     911 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-lccm2" podUID="a8729f4d-7653-42f2-bcdc-0b95f4aa7080"
	
	
	==> storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] <==
	I0828 18:22:35.755962       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:22:35.765929       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:22:35.765994       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:22:53.168809       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:22:53.169983       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8a2adc2-ab3a-4591-a40e-ec62266e56ac", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686 became leader
	I0828 18:22:53.170160       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686!
	I0828 18:22:53.270678       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-640552_465f6b51-f0c6-437b-8c88-cbba8bf75686!
	
	
	==> storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] <==
	I0828 18:22:04.906425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:22:34.908831       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
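Note on the dump above: the failure mode is consistent. The metrics-server pod is pinned to fake.domain/registry.k8s.io/echoserver:1.4 (the deliberately unreachable registry the test injects via --registries=MetricsServer=fake.domain), so the kubelet loops on ErrImagePull/ImagePullBackOff and the aggregated v1beta1.metrics.k8s.io APIService keeps answering 503 in the kube-apiserver log. When reproducing locally, the APIService and pod state can be checked directly; the commands below are an illustrative sketch rather than part of the test harness, with the context name taken from the report and the k8s-app=metrics-server label assumed from the standard metrics-server manifest.

# Check whether the aggregated metrics APIService reports Available, and the reason if not
kubectl --context default-k8s-diff-port-640552 get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}{"  "}{.status.conditions[?(@.type=="Available")].message}{"\n"}'

# Confirm the metrics-server pod is stuck pulling its image
kubectl --context default-k8s-diff-port-640552 -n kube-system get pods -l k8s-app=metrics-server -o wide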
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-lccm2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2: exit status 1 (57.902135ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-lccm2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-640552 describe pod metrics-server-6867b74b74-lccm2: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (484.02s)
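Note: the post-mortem lists metrics-server-6867b74b74-lccm2 as the only non-Running pod, yet the follow-up describe returns NotFound. That pattern usually just means the pod disappeared between the two commands (for example, the profile being torn down or the ReplicaSet replacing the pod). The enumeration itself can be rerun manually; a minimal sketch using the context name from the report:

# List every pod not in the Running phase, across all namespaces
# (the same field selector the post-mortem helper uses)
kubectl --context default-k8s-diff-port-640552 get po -A --field-selector=status.phase!=Running -o wide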

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (436.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-014980 -n embed-certs-014980
E0828 18:43:09.993558   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:43:10.107280591 +0000 UTC m=+6709.157842565
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-014980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-014980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.682µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-014980 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
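Note: the assertion above expects the dashboard-metrics-scraper deployment to carry the overridden image registry.k8s.io/echoserver:1.4 (set via --images=MetricsScraper=registry.k8s.io/echoserver:1.4, visible in the Audit table below); because the describe hit the context deadline, the deployment info printed here is empty. As a hedged sketch, the same image field can be read directly with jsonpath, using the context name from the report:

# Print the container images of the scraper deployment; the test expects
# registry.k8s.io/echoserver:1.4 to appear here
kubectl --context embed-certs-014980 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'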
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-014980 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-014980 logs -n 25: (1.187879812s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC | 28 Aug 24 18:41 UTC |
	| start   | -p newest-cni-835349 --memory=2200 --alsologtostderr   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC | 28 Aug 24 18:42 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-341028 | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | disable-driver-mounts-341028                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-835349             | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-835349                                   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-835349                  | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC | 28 Aug 24 18:42 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-835349 --memory=2200 --alsologtostderr   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:42 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:42:40
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:42:40.288520   84345 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:42:40.288629   84345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:42:40.288640   84345 out.go:358] Setting ErrFile to fd 2...
	I0828 18:42:40.288647   84345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:42:40.288859   84345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:42:40.289411   84345 out.go:352] Setting JSON to false
	I0828 18:42:40.290417   84345 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8706,"bootTime":1724861854,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:42:40.290486   84345 start.go:139] virtualization: kvm guest
	I0828 18:42:40.292544   84345 out.go:177] * [newest-cni-835349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:42:40.293877   84345 notify.go:220] Checking for updates...
	I0828 18:42:40.293899   84345 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:42:40.295165   84345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:42:40.296239   84345 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:42:40.297389   84345 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:42:40.298464   84345 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:42:40.299455   84345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:42:40.300829   84345 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:42:40.301223   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.301281   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.316802   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0828 18:42:40.317197   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.317713   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.317736   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.318103   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.318325   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.318579   84345 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:42:40.318851   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.318883   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.333400   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0828 18:42:40.333834   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.334362   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.334395   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.334765   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.334954   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.371608   84345 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:42:40.372752   84345 start.go:297] selected driver: kvm2
	I0828 18:42:40.372773   84345 start.go:901] validating driver "kvm2" against &{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:42:40.372899   84345 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:42:40.373590   84345 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:42:40.373655   84345 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:42:40.388558   84345 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:42:40.388950   84345 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 18:42:40.389020   84345 cni.go:84] Creating CNI manager for ""
	I0828 18:42:40.389036   84345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:42:40.389084   84345 start.go:340] cluster config:
	{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:42:40.389209   84345 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:42:40.391037   84345 out.go:177] * Starting "newest-cni-835349" primary control-plane node in "newest-cni-835349" cluster
	I0828 18:42:40.392145   84345 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:42:40.392175   84345 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:42:40.392181   84345 cache.go:56] Caching tarball of preloaded images
	I0828 18:42:40.392272   84345 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:42:40.392285   84345 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 18:42:40.392387   84345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:42:40.392558   84345 start.go:360] acquireMachinesLock for newest-cni-835349: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:42:40.392601   84345 start.go:364] duration metric: took 24.588µs to acquireMachinesLock for "newest-cni-835349"
	I0828 18:42:40.392616   84345 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:42:40.392631   84345 fix.go:54] fixHost starting: 
	I0828 18:42:40.392898   84345 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:42:40.392939   84345 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:42:40.407254   84345 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40809
	I0828 18:42:40.407666   84345 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:42:40.408077   84345 main.go:141] libmachine: Using API Version  1
	I0828 18:42:40.408095   84345 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:42:40.408429   84345 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:42:40.408605   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:40.408778   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:42:40.410257   84345 fix.go:112] recreateIfNeeded on newest-cni-835349: state=Stopped err=<nil>
	I0828 18:42:40.410297   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	W0828 18:42:40.410465   84345 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:42:40.412487   84345 out.go:177] * Restarting existing kvm2 VM for "newest-cni-835349" ...
	I0828 18:42:40.413859   84345 main.go:141] libmachine: (newest-cni-835349) Calling .Start
	I0828 18:42:40.414029   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring networks are active...
	I0828 18:42:40.414975   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring network default is active
	I0828 18:42:40.415308   84345 main.go:141] libmachine: (newest-cni-835349) Ensuring network mk-newest-cni-835349 is active
	I0828 18:42:40.415756   84345 main.go:141] libmachine: (newest-cni-835349) Getting domain xml...
	I0828 18:42:40.416466   84345 main.go:141] libmachine: (newest-cni-835349) Creating domain...
	I0828 18:42:41.643666   84345 main.go:141] libmachine: (newest-cni-835349) Waiting to get IP...
	I0828 18:42:41.644576   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:41.645049   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:41.645096   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:41.645013   84380 retry.go:31] will retry after 261.688627ms: waiting for machine to come up
	I0828 18:42:41.908525   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:41.909063   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:41.909096   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:41.909010   84380 retry.go:31] will retry after 273.446367ms: waiting for machine to come up
	I0828 18:42:42.184438   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.184942   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.184964   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.184890   84380 retry.go:31] will retry after 385.016034ms: waiting for machine to come up
	I0828 18:42:42.571427   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.571875   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.571907   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.571821   84380 retry.go:31] will retry after 409.149804ms: waiting for machine to come up
	I0828 18:42:42.982309   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:42.982802   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:42.982823   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:42.982771   84380 retry.go:31] will retry after 743.553719ms: waiting for machine to come up
	I0828 18:42:43.727664   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:43.728153   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:43.728178   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:43.728113   84380 retry.go:31] will retry after 587.31043ms: waiting for machine to come up
	I0828 18:42:44.316697   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:44.317200   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:44.317227   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:44.317141   84380 retry.go:31] will retry after 934.216078ms: waiting for machine to come up
	I0828 18:42:45.253352   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:45.253911   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:45.253936   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:45.253865   84380 retry.go:31] will retry after 1.088835525s: waiting for machine to come up
	I0828 18:42:46.344716   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:46.345216   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:46.345246   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:46.345168   84380 retry.go:31] will retry after 1.716287117s: waiting for machine to come up
	I0828 18:42:48.063044   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:48.063482   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:48.063511   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:48.063439   84380 retry.go:31] will retry after 1.549324706s: waiting for machine to come up
	I0828 18:42:49.615165   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:49.615635   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:49.615664   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:49.615575   84380 retry.go:31] will retry after 2.003187438s: waiting for machine to come up
	I0828 18:42:51.620638   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:51.621074   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:51.621100   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:51.621025   84380 retry.go:31] will retry after 3.445816523s: waiting for machine to come up
	I0828 18:42:55.068243   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:55.068716   84345 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:55.068748   84345 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:55.068673   84380 retry.go:31] will retry after 3.263238671s: waiting for machine to come up
	I0828 18:42:58.335793   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.336295   84345 main.go:141] libmachine: (newest-cni-835349) Found IP for machine: 192.168.50.179
	I0828 18:42:58.336349   84345 main.go:141] libmachine: (newest-cni-835349) Reserving static IP address...
	I0828 18:42:58.336365   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has current primary IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.336868   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "newest-cni-835349", mac: "52:54:00:53:3a:ba", ip: "192.168.50.179"} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.336911   84345 main.go:141] libmachine: (newest-cni-835349) Reserved static IP address: 192.168.50.179
	I0828 18:42:58.336932   84345 main.go:141] libmachine: (newest-cni-835349) DBG | skip adding static IP to network mk-newest-cni-835349 - found existing host DHCP lease matching {name: "newest-cni-835349", mac: "52:54:00:53:3a:ba", ip: "192.168.50.179"}
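The lines above show libmachine polling libvirt's DHCP leases with growing delays until the restarted domain reports an address. A minimal Go sketch of that retry-with-backoff pattern follows (the lookupIP helper, timings, and backoff cap are illustrative assumptions, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for querying the libvirt DHCP leases for the domain's
// MAC address; it is a placeholder used only for illustration.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries lookupIP with a growing delay, mirroring the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2 // back off, roughly matching the growing delays in the log
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}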
	I0828 18:42:58.336958   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Getting to WaitForSSH function...
	I0828 18:42:58.336979   84345 main.go:141] libmachine: (newest-cni-835349) Waiting for SSH to be available...
	I0828 18:42:58.339449   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.339876   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.339906   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.340067   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH client type: external
	I0828 18:42:58.340091   84345 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa (-rw-------)
	I0828 18:42:58.340140   84345 main.go:141] libmachine: (newest-cni-835349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:42:58.340156   84345 main.go:141] libmachine: (newest-cni-835349) DBG | About to run SSH command:
	I0828 18:42:58.340174   84345 main.go:141] libmachine: (newest-cni-835349) DBG | exit 0
	I0828 18:42:58.462048   84345 main.go:141] libmachine: (newest-cni-835349) DBG | SSH cmd err, output: <nil>: 
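Once the machine has an IP, provisioning blocks on the WaitForSSH probe shown above, which shells out to the external ssh client and runs `exit 0` until the connection succeeds. A hedged Go sketch of that probe (host, key path, and retry interval are placeholders; the ssh options are the ones printed in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs `ssh ... exit 0` until it succeeds, mirroring
// the WaitForSSH step in the log. Timings are illustrative.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("ssh",
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@"+host,
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("SSH not available on %s after %v", host, timeout)
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	if err := waitForSSH("192.168.50.179", "/path/to/id_rsa", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}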
	I0828 18:42:58.462372   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetConfigRaw
	I0828 18:42:58.462985   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:58.465100   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.465464   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.465498   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.465703   84345 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:42:58.465890   84345 machine.go:93] provisionDockerMachine start ...
	I0828 18:42:58.465911   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:58.466145   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.468355   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.468750   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.468795   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.468847   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.469021   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.469178   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.469297   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.469486   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.469663   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.469672   84345 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:42:58.566455   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:42:58.566488   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.566777   84345 buildroot.go:166] provisioning hostname "newest-cni-835349"
	I0828 18:42:58.566806   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.566991   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.569678   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.570031   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.570061   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.570214   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.570404   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.570561   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.570697   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.570955   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.571156   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.571173   84345 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-835349 && echo "newest-cni-835349" | sudo tee /etc/hostname
	I0828 18:42:58.679405   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-835349
	
	I0828 18:42:58.679441   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.682125   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.682477   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.682502   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.682668   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.682838   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.682999   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.683108   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.683303   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:58.683457   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:58.683473   84345 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-835349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-835349/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-835349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:42:58.790240   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:42:58.790270   84345 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:42:58.790292   84345 buildroot.go:174] setting up certificates
	I0828 18:42:58.790308   84345 provision.go:84] configureAuth start
	I0828 18:42:58.790320   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:58.790653   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:58.793453   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.793847   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.793877   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.794044   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.796517   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.796900   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.796932   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.797049   84345 provision.go:143] copyHostCerts
	I0828 18:42:58.797110   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:42:58.797132   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:42:58.797212   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:42:58.797383   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:42:58.797394   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:42:58.797439   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:42:58.797550   84345 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:42:58.797561   84345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:42:58.797600   84345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:42:58.797684   84345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.newest-cni-835349 san=[127.0.0.1 192.168.50.179 localhost minikube newest-cni-835349]
	I0828 18:42:58.887168   84345 provision.go:177] copyRemoteCerts
	I0828 18:42:58.887220   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:42:58.887246   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:58.889749   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.890048   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:58.890105   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:58.890261   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:58.890434   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:58.890590   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:58.890768   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:58.967818   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:42:58.990102   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:42:59.013763   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:42:59.036731   84345 provision.go:87] duration metric: took 246.412579ms to configureAuth
	I0828 18:42:59.036757   84345 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:42:59.036968   84345 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:42:59.037100   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.039916   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.040274   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.040314   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.040484   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.040730   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.040901   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.041031   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.041190   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:59.041409   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:59.041432   84345 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:42:59.253819   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:42:59.253847   84345 machine.go:96] duration metric: took 787.945536ms to provisionDockerMachine
	I0828 18:42:59.253859   84345 start.go:293] postStartSetup for "newest-cni-835349" (driver="kvm2")
	I0828 18:42:59.253898   84345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:42:59.253917   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.254256   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:42:59.254283   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.256843   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.257105   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.257144   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.257306   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.257533   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.257707   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.257825   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.336860   84345 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:42:59.341675   84345 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:42:59.341704   84345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:42:59.341768   84345 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:42:59.341877   84345 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:42:59.341992   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:42:59.351114   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:42:59.373583   84345 start.go:296] duration metric: took 119.70869ms for postStartSetup
	I0828 18:42:59.373638   84345 fix.go:56] duration metric: took 18.981012092s for fixHost
	I0828 18:42:59.373664   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.376250   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.376600   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.376636   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.376806   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.377019   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.377185   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.377356   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.377550   84345 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:59.377739   84345 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:59.377750   84345 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:42:59.474399   84345 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724870579.434596442
	
	I0828 18:42:59.474420   84345 fix.go:216] guest clock: 1724870579.434596442
	I0828 18:42:59.474428   84345 fix.go:229] Guest: 2024-08-28 18:42:59.434596442 +0000 UTC Remote: 2024-08-28 18:42:59.373643401 +0000 UTC m=+19.120583395 (delta=60.953041ms)
	I0828 18:42:59.474447   84345 fix.go:200] guest clock delta is within tolerance: 60.953041ms
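The fixHost step reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and only resynchronizes when the delta exceeds a tolerance; here the ~61ms delta is accepted. A small Go sketch of that comparison (the 2s tolerance is an assumed value for illustration):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest/host clock skew is within tolerance,
// mirroring the "guest clock delta is within tolerance" message in the log.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(61 * time.Millisecond)               // roughly the delta seen in the log
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)  // assumed tolerance for illustration
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok)
}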
	I0828 18:42:59.474461   84345 start.go:83] releasing machines lock for "newest-cni-835349", held for 19.081852477s
	I0828 18:42:59.474479   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.474739   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:59.477422   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.477745   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.477776   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.477867   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478338   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478518   84345 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:59.478610   84345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:42:59.478663   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.478723   84345 ssh_runner.go:195] Run: cat /version.json
	I0828 18:42:59.478748   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:59.481237   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481584   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.481608   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481627   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.481768   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.481954   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.482066   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:59.482093   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.482106   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:59.482287   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:59.482292   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.482473   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:59.482639   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:59.482805   84345 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:59.594061   84345 ssh_runner.go:195] Run: systemctl --version
	I0828 18:42:59.600110   84345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:42:59.740000   84345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:42:59.745780   84345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:42:59.745843   84345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:42:59.761529   84345 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:42:59.761551   84345 start.go:495] detecting cgroup driver to use...
	I0828 18:42:59.761617   84345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:42:59.777658   84345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:42:59.791169   84345 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:42:59.791218   84345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:42:59.804618   84345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:42:59.817494   84345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:42:59.932207   84345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:43:00.092809   84345 docker.go:233] disabling docker service ...
	I0828 18:43:00.092914   84345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:43:00.106715   84345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:43:00.119540   84345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:43:00.226683   84345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:43:00.345919   84345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:43:00.359139   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:43:00.375915   84345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:43:00.375972   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.385221   84345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:43:00.385285   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.394715   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.404210   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.414289   84345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:43:00.424754   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.435023   84345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.451963   84345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:43:00.462132   84345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:43:00.471706   84345 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:43:00.471765   84345 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:43:00.485233   84345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
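Bridge netfilter is verified with a sysctl probe; because the module is not loaded yet the probe fails, so br_netfilter is loaded via modprobe and IPv4 forwarding is enabled before CRI-O is restarted. A hedged Go sketch of that check-then-load fallback (commands taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the log's sequence: probe the sysctl first
// and, if that fails (module not loaded), load br_netfilter, then enable
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// enable IPv4 forwarding, as in the log
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}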
	I0828 18:43:00.494526   84345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:00.605408   84345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:43:00.695412   84345 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:43:00.695487   84345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:43:00.699894   84345 start.go:563] Will wait 60s for crictl version
	I0828 18:43:00.699948   84345 ssh_runner.go:195] Run: which crictl
	I0828 18:43:00.703281   84345 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:43:00.739959   84345 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:43:00.740060   84345 ssh_runner.go:195] Run: crio --version
	I0828 18:43:00.766683   84345 ssh_runner.go:195] Run: crio --version
	I0828 18:43:00.796928   84345 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:43:00.798223   84345 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:43:00.800754   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:00.801014   84345 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:42:50 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:43:00.801045   84345 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:43:00.801251   84345 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:43:00.805066   84345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:43:00.818774   84345 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0828 18:43:00.819917   84345 kubeadm.go:883] updating cluster {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:
6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:43:00.820036   84345 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:43:00.820100   84345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:00.856136   84345 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:43:00.856206   84345 ssh_runner.go:195] Run: which lz4
	I0828 18:43:00.859828   84345 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:43:00.863554   84345 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:43:00.863579   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:43:02.079244   84345 crio.go:462] duration metric: took 1.219446174s to copy over tarball
	I0828 18:43:02.079330   84345 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:43:04.178656   84345 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.099279079s)
	I0828 18:43:04.178693   84345 crio.go:469] duration metric: took 2.099414937s to extract the tarball
	I0828 18:43:04.178703   84345 ssh_runner.go:146] rm: /preloaded.tar.lz4
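`crictl images` found no preloaded images, so minikube checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, extracts it into /var, and removes the tarball. A hedged Go sketch of that flow (paths are taken from the log; the local stat/cp calls stand in for minikube's ssh_runner transfers):

package main

import (
	"fmt"
	"os/exec"
)

const (
	remoteTarball = "/preloaded.tar.lz4"
	localTarball  = "/home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4"
)

// run stands in for minikube's ssh_runner; here commands execute locally.
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// ensurePreload mirrors the log: check whether the tarball already exists on
// the guest, copy it over if not, extract it into /var, then delete it.
func ensurePreload() error {
	if err := run("stat", "-c", "%s %y", remoteTarball); err != nil {
		// copy step; `cp` stands in for the scp-over-ssh transfer in the log
		if err := run("cp", localTarball, remoteTarball); err != nil {
			return fmt.Errorf("copy preload tarball: %w", err)
		}
	}
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remoteTarball); err != nil {
		return fmt.Errorf("extract preload tarball: %w", err)
	}
	return run("sudo", "rm", remoteTarball)
}

func main() {
	if err := ensurePreload(); err != nil {
		fmt.Println(err)
	}
}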
	I0828 18:43:04.217298   84345 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:43:04.266008   84345 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:43:04.266031   84345 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:43:04.266039   84345 kubeadm.go:934] updating node { 192.168.50.179 8443 v1.31.0 crio true true} ...
	I0828 18:43:04.266194   84345 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-835349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:43:04.266288   84345 ssh_runner.go:195] Run: crio config
	I0828 18:43:04.314721   84345 cni.go:84] Creating CNI manager for ""
	I0828 18:43:04.314740   84345 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:43:04.314749   84345 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0828 18:43:04.314772   84345 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-835349 NodeName:newest-cni-835349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:43:04.314961   84345 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-835349"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:43:04.315039   84345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:43:04.326490   84345 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:43:04.326558   84345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:43:04.336465   84345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0828 18:43:04.353990   84345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:43:04.370747   84345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
	I0828 18:43:04.387487   84345 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I0828 18:43:04.391098   84345 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
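The control-plane.minikube.internal entry is added idempotently: any stale line for the name is filtered out of /etc/hosts and a fresh IP<tab>name line is appended, exactly as the grep/echo/cp pipeline above does. A hedged Go sketch of that idempotent-append pattern (file path and entry taken from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry rewrites hostsPath so it contains exactly one line mapping
// name to ip, mirroring the log's grep -v / echo / cp pipeline.
func ensureHostEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this hostname
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(out, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostEntry("/etc/hosts", "192.168.50.179", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}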
	I0828 18:43:04.402869   84345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:43:04.524434   84345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:43:04.549616   84345 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349 for IP: 192.168.50.179
	I0828 18:43:04.549640   84345 certs.go:194] generating shared ca certs ...
	I0828 18:43:04.549662   84345 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:04.549830   84345 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:43:04.549885   84345 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:43:04.549899   84345 certs.go:256] generating profile certs ...
	I0828 18:43:04.549996   84345 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.key
	I0828 18:43:04.550088   84345 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c
	I0828 18:43:04.550147   84345 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key
	I0828 18:43:04.550287   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:43:04.550318   84345 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:43:04.550328   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:43:04.550363   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:43:04.550405   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:43:04.550451   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:43:04.550556   84345 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:43:04.551378   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:43:04.607537   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:43:04.640395   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:43:04.676623   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:43:04.713030   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:43:04.736395   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:43:04.760024   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:43:04.785509   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:43:04.809605   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:43:04.832771   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:43:04.855465   84345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:43:04.879459   84345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:43:04.895101   84345 ssh_runner.go:195] Run: openssl version
	I0828 18:43:04.900559   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:43:04.910869   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.914964   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.915019   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:43:04.920727   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:43:04.932458   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:43:04.943845   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.948500   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.948563   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:43:04.954037   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:43:04.964951   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:43:04.974931   84345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.979256   84345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.979315   84345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:43:04.984797   84345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
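
The ls/openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and creates a symlink in /etc/ssl/certs named after its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients find the certificate during chain building. A rough Go sketch of that hash-then-symlink step, shelling out to the same openssl invocation seen in the log; linkCertByHash is a hypothetical helper and the path is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of certPath and creates
    // /etc/ssl/certs/<hash>.0 pointing at it, the same "openssl x509 -hash -noout"
    // plus "ln -fs" pattern shown in the log above.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
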
	I0828 18:43:04.994940   84345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:43:04.999323   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:43:05.005085   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:43:05.010924   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:43:05.016560   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:43:05.022100   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:43:05.027387   84345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
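
The -checkend 86400 runs above ask whether each control-plane certificate will still be valid 24 hours from now. The same question can be answered directly with Go's crypto/x509; expiresWithin below is a hypothetical helper for illustration, not the code minikube actually runs:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same check "openssl x509 -noout -checkend 86400" performs in the log.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }
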
	I0828 18:43:05.032655   84345 kubeadm.go:392] StartCluster: {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0
s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:43:05.032738   84345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:43:05.032775   84345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:43:05.069173   84345 cri.go:89] found id: ""
	I0828 18:43:05.069252   84345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:43:05.079874   84345 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:43:05.079904   84345 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:43:05.079956   84345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:43:05.089635   84345 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:43:05.090509   84345 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-835349" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:43:05.091095   84345 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-835349" cluster setting kubeconfig missing "newest-cni-835349" context setting]
	I0828 18:43:05.091990   84345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:43:05.093668   84345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:43:05.104043   84345 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.179
	I0828 18:43:05.104075   84345 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:43:05.104086   84345 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:43:05.104129   84345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:43:05.149020   84345 cri.go:89] found id: ""
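
The empty "found id" result comes from listing kube-system containers through the CRI before the kubelet is stopped. A small Go sketch of that step, assuming crictl is installed and talking to the default runtime endpoint (listKubeSystemContainers is a hypothetical helper; the flags match the command in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers returns the container IDs crictl reports for the
    // kube-system namespace, i.e. the command shown in the log:
    //   crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("found %d kube-system containers\n", len(ids))
    }
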
	I0828 18:43:05.149096   84345 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:43:05.165415   84345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:43:05.174673   84345 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:43:05.174694   84345 kubeadm.go:157] found existing configuration files:
	
	I0828 18:43:05.174738   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:43:05.183716   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:43:05.183787   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:43:05.192521   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:43:05.200837   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:43:05.200899   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:43:05.211883   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:43:05.221422   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:43:05.221481   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:43:05.231366   84345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:43:05.239908   84345 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:43:05.239980   84345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
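
Each of the four kubeconfig-style files under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so the following kubeadm phases regenerate it. A compact Go sketch of that grep-then-rm pattern (pruneStaleKubeconfig is a hypothetical helper, shown only to summarize the loop above):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pruneStaleKubeconfig removes path unless it already references endpoint,
    // the pattern applied to admin.conf, kubelet.conf, controller-manager.conf
    // and scheduler.conf in the log above.
    func pruneStaleKubeconfig(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if os.IsNotExist(err) {
            return nil // nothing to clean up
        }
        if err != nil {
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // still points at the expected control plane, keep it
        }
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := pruneStaleKubeconfig("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }
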
	I0828 18:43:05.248516   84345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:43:05.257366   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:05.365457   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.202132   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.392657   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:43:06.469239   84345 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
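
The five kubeadm invocations above rebuild the control plane piece by piece rather than running a full init: certificates, kubeconfigs, kubelet bootstrap, static pod manifests, and local etcd. A sketch that replays the same phases in order, assuming the kubeadm binary and config path taken from the log (runInitPhases is a hypothetical wrapper, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // runInitPhases replays the kubeadm phases used for a control-plane restart in
    // the order the log shows: certs, kubeconfig, kubelet-start, control-plane, etcd.
    func runInitPhases(kubeadm, config string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", config},
            {"init", "phase", "kubeconfig", "all", "--config", config},
            {"init", "phase", "kubelet-start", "--config", config},
            {"init", "phase", "control-plane", "all", "--config", config},
            {"init", "phase", "etcd", "local", "--config", config},
        }
        for _, args := range phases {
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("kubeadm %v: %w", args, err)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases("/var/lib/minikube/binaries/v1.31.0/kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
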
	I0828 18:43:06.560092   84345 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:43:06.560237   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:07.061270   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:07.560526   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:08.060341   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:08.560458   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:09.060260   84345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:43:09.074008   84345 api_server.go:72] duration metric: took 2.513928542s to wait for apiserver process to appear ...
	I0828 18:43:09.074038   84345 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:43:09.074062   84345 api_server.go:253] Checking apiserver healthz at https://192.168.50.179:8443/healthz ...
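
After the kubelet restarts the static pods, the log waits for the kube-apiserver process (retrying pgrep every 500ms) and then polls https://192.168.50.179:8443/healthz until it answers. A minimal Go sketch of that health poll; waitForHealthz is a hypothetical helper, and skipping TLS verification is only for the sketch, where the real check would trust the cluster CA at /var/lib/minikube/certs/ca.crt:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint every interval until it
    // returns 200 or the deadline passes, mirroring the retry loop in the log.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.179:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
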
	
	
	==> CRI-O <==
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.705171724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870590705149646,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2179a258-417f-43dc-b967-9221311978a8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.705880150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d8b9a3c-ebe1-4ccd-bee2-18d173089e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.705938128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d8b9a3c-ebe1-4ccd-bee2-18d173089e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.706132041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d8b9a3c-ebe1-4ccd-bee2-18d173089e0c name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.746168036Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc4042a8-1511-4e47-8465-ff7bcabaaa21 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.746245897Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc4042a8-1511-4e47-8465-ff7bcabaaa21 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.748034212Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02ad4ca0-7539-485e-9295-437faa7b5aae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.748807073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870590748779001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02ad4ca0-7539-485e-9295-437faa7b5aae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.750426341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98f3970d-9933-4117-9af6-ef506bab0b07 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.750503961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98f3970d-9933-4117-9af6-ef506bab0b07 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.750862392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98f3970d-9933-4117-9af6-ef506bab0b07 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.800955361Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=115d58dc-2806-430c-84df-de4bda1c9a43 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.801060598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=115d58dc-2806-430c-84df-de4bda1c9a43 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.802683742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cddf7898-a4b7-4fe2-a7e3-0dcee96468ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.803288811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870590803255981,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cddf7898-a4b7-4fe2-a7e3-0dcee96468ac name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.803923046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50c98ddc-2140-4097-ad3a-581859706665 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.804000540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50c98ddc-2140-4097-ad3a-581859706665 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.804299965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50c98ddc-2140-4097-ad3a-581859706665 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.847725390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=180e4fab-c2f9-409c-912c-f2311feee4d8 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.847853281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=180e4fab-c2f9-409c-912c-f2311feee4d8 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.849276322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92cf61ee-80f1-4f93-9ef7-ddfa38d0ded7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.850251914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870590850221640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92cf61ee-80f1-4f93-9ef7-ddfa38d0ded7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.851221492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dab5eb54-7192-4219-93f0-7d7569e3fa8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.851317225Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dab5eb54-7192-4219-93f0-7d7569e3fa8b name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:43:10 embed-certs-014980 crio[707]: time="2024-08-28 18:43:10.851666251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a,PodSandboxId:b36f0b3836447f5dfa26944f9a6b103e7d8ddce00e71ea4c0a99ee66f18ad845,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869603101176033,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9e09413-b695-420e-bf45-1f8f40ff7d05,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1,PodSandboxId:61e4b445685f3aadbb896849a4708aab7b1c419cdabe2087303eedc82d6718c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602024719344,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-cz29x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd89ac5c-011e-4810-b681-fae999af2b6b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27,PodSandboxId:c3e3e300f9423a39398404417b02e02644ab23c0261df4a3ae93b58bd5496836,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869602025001342,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-djjbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e
c3e4fc9-c257-40c5-bee2-6ad7335e8bf8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174,PodSandboxId:3ce46a9948a7b50a9b5612fdb22b9467b2c2a2c1ee2a4af3b317da6a834d5f43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1724869601331066351,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzw4m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b46e7805-0395-40ae-92e6-ab43eb4b2b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87,PodSandboxId:4b4e3ea46ba40410cffa36c45bba7e65f368b981d6ec0267d239398307a0c7aa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869590515966426,Labels:ma
p[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2be8df2afc75f1ee8c35748a5ed7b7b0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17,PodSandboxId:0057f243a3f39819998caf62a3c9331f5e175e2daa0d5aa1f52841f33e9a4541,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869590513114880,Labels:map[s
tring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81bca478b04e382b536c96c7dc6610af,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d,PodSandboxId:33f16483fba1945d5606cfdf50b4e677dca862c716421ce82d155dfad4756a79,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869590438408124,Labels
:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef,PodSandboxId:61d32464e573483e9c6f06a3357c9bf30043da9e4deea79b0b3a91823bf816db,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869590420885095,Labels:map[string]string{io.
kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3105e9c370c576e3c2b7f7033575b471,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba,PodSandboxId:da2c894e5dfb6f28b71b5efec41389df398a03747668ff9c19f0f8e8231fd1cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1724869302441758358,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-014980,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbd57483a24d8463a91c003dc722ccef,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dab5eb54-7192-4219-93f0-7d7569e3fa8b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	03453e02aa996       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   b36f0b3836447       storage-provisioner
	aba7772bf1d64       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   c3e3e300f9423       coredns-6f6b679f8f-djjbq
	e333800301f7f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   61e4b445685f3       coredns-6f6b679f8f-cz29x
	a872642289813       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   3ce46a9948a7b       kube-proxy-hzw4m
	2b4e7e2bb458e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   4b4e3ea46ba40       kube-scheduler-embed-certs-014980
	4b2026b2021e7       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   0057f243a3f39       kube-controller-manager-embed-certs-014980
	75c7c0076722f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   33f16483fba19       kube-apiserver-embed-certs-014980
	94dfa18e75e43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   61d32464e5734       etcd-embed-certs-014980
	fdfe0e43ef655       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   da2c894e5dfb6       kube-apiserver-embed-certs-014980
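	
	The listing above is the CRI-level view of the node's containers. Assuming the minikube profile is named embed-certs-014980 (as the pod and node names suggest), roughly the same table can be reproduced directly on the node with crictl:
	
	    minikube ssh -p embed-certs-014980 -- sudo crictl ps -a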
	
	
	==> coredns [aba7772bf1d640f5f4d72968ec1a1116f54245ab8b771d59b998c205ec11cb27] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e333800301f7f5da0b122571c15d46079b6c961b10e512cf03cf8bf22d3cb8c1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
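	
	Both CoreDNS replicas report the same configuration SHA512, i.e. they loaded the same Corefile. That configuration lives in the coredns ConfigMap in kube-system; assuming the kubectl context follows the profile name, it can be inspected with something like:
	
	    kubectl --context embed-certs-014980 -n kube-system get configmap coredns -o yaml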
	
	
	==> describe nodes <==
	Name:               embed-certs-014980
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-014980
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=embed-certs-014980
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:26:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-014980
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:42:03 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:42:03 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:42:03 +0000   Wed, 28 Aug 2024 18:26:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:42:03 +0000   Wed, 28 Aug 2024 18:26:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.130
	  Hostname:    embed-certs-014980
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a365d754a9c94a5cbea721201dfbc6d0
	  System UUID:                a365d754-a9c9-4a5c-bea7-21201dfbc6d0
	  Boot ID:                    10d1724c-b9f0-41cf-8a3a-201f51d4a3fb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-cz29x                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-djjbq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-014980                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-014980             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-014980    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-hzw4m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-014980             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-7nkmb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-014980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-014980 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-014980 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-014980 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-014980 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-014980 event: Registered Node embed-certs-014980 in Controller
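	
	The node description above is kubectl describe node output for the single control-plane node; assuming the kubectl context is named after the profile, it can be re-checked against the live cluster with:
	
	    kubectl --context embed-certs-014980 describe node embed-certs-014980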
	
	
	==> dmesg <==
	[  +0.051109] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036651] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.723842] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.898225] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.520794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.185280] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.056169] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059224] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.178149] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.168839] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.291936] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +3.986311] systemd-fstab-generator[789]: Ignoring "noauto" option for root device
	[  +1.810839] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.062133] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.492248] kauditd_printk_skb: 69 callbacks suppressed
	[  +8.024521] kauditd_printk_skb: 85 callbacks suppressed
	[Aug28 18:26] systemd-fstab-generator[2546]: Ignoring "noauto" option for root device
	[  +0.061496] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.010292] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +0.079197] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.732577] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.073541] systemd-fstab-generator[3017]: Ignoring "noauto" option for root device
	[  +6.948507] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [94dfa18e75e434576541ce5bd5918f485703f75fd26a44c32607e7389b3cf8ef] <==
	{"level":"info","ts":"2024-08-28T18:26:31.456330Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T18:26:31.462705Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:26:31.462754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T18:36:31.498586Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-08-28T18:36:31.507855Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":720,"took":"8.741581ms","hash":3352187341,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2232320,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-28T18:36:31.507954Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3352187341,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T18:41:31.505800Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-08-28T18:41:31.509480Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":963,"took":"3.109185ms","hash":2252885313,"current-db-size-bytes":2232320,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1568768,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-28T18:41:31.509611Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2252885313,"revision":963,"compact-revision":720}
	{"level":"info","ts":"2024-08-28T18:42:14.038766Z","caller":"traceutil/trace.go:171","msg":"trace[1043695875] linearizableReadLoop","detail":"{readStateIndex:1447; appliedIndex:1446; }","duration":"122.086361ms","start":"2024-08-28T18:42:13.916643Z","end":"2024-08-28T18:42:14.038730Z","steps":["trace[1043695875] 'read index received'  (duration: 121.858537ms)","trace[1043695875] 'applied index is now lower than readState.Index'  (duration: 227.261µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:42:14.039032Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.262674ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-28T18:42:14.039164Z","caller":"traceutil/trace.go:171","msg":"trace[1534235792] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1243; }","duration":"122.515282ms","start":"2024-08-28T18:42:13.916624Z","end":"2024-08-28T18:42:14.039139Z","steps":["trace[1534235792] 'agreement among raft nodes before linearized reading'  (duration: 122.231545ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T18:42:14.039401Z","caller":"traceutil/trace.go:171","msg":"trace[2076122734] transaction","detail":"{read_only:false; response_revision:1243; number_of_response:1; }","duration":"264.593685ms","start":"2024-08-28T18:42:13.774788Z","end":"2024-08-28T18:42:14.039381Z","steps":["trace[2076122734] 'process raft request'  (duration: 263.806237ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:42:15.040470Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.394475ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14102056424842418964 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.130\" mod_revision:1236 > success:<request_put:<key:\"/registry/masterleases/192.168.72.130\" value_size:67 lease:4878684387987643154 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.130\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-28T18:42:15.040608Z","caller":"traceutil/trace.go:171","msg":"trace[1796965522] transaction","detail":"{read_only:false; response_revision:1244; number_of_response:1; }","duration":"274.776281ms","start":"2024-08-28T18:42:14.765814Z","end":"2024-08-28T18:42:15.040591Z","steps":["trace[1796965522] 'process raft request'  (duration: 144.570301ms)","trace[1796965522] 'compare'  (duration: 129.048422ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:42:15.298302Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"157.234798ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:42:15.298464Z","caller":"traceutil/trace.go:171","msg":"trace[13461416] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1244; }","duration":"157.392313ms","start":"2024-08-28T18:42:15.141043Z","end":"2024-08-28T18:42:15.298435Z","steps":["trace[13461416] 'range keys from in-memory index tree'  (duration: 157.172006ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T18:43:07.481471Z","caller":"traceutil/trace.go:171","msg":"trace[917646205] linearizableReadLoop","detail":"{readStateIndex:1503; appliedIndex:1502; }","duration":"341.326382ms","start":"2024-08-28T18:43:07.140108Z","end":"2024-08-28T18:43:07.481435Z","steps":["trace[917646205] 'read index received'  (duration: 341.146041ms)","trace[917646205] 'applied index is now lower than readState.Index'  (duration: 179.759µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:43:07.481754Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"341.622156ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:07.481797Z","caller":"traceutil/trace.go:171","msg":"trace[668980774] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1288; }","duration":"341.687249ms","start":"2024-08-28T18:43:07.140104Z","end":"2024-08-28T18:43:07.481791Z","steps":["trace[668980774] 'agreement among raft nodes before linearized reading'  (duration: 341.605715ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T18:43:07.481676Z","caller":"traceutil/trace.go:171","msg":"trace[523337227] transaction","detail":"{read_only:false; response_revision:1288; number_of_response:1; }","duration":"636.058659ms","start":"2024-08-28T18:43:06.845603Z","end":"2024-08-28T18:43:07.481662Z","steps":["trace[523337227] 'process raft request'  (duration: 635.706668ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:43:07.481827Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:43:07.140070Z","time spent":"341.744304ms","remote":"127.0.0.1:39016","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-28T18:43:07.481977Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:43:06.845518Z","time spent":"636.370046ms","remote":"127.0.0.1:39276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-014980\" mod_revision:1279 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-014980\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-014980\" > >"}
	{"level":"warn","ts":"2024-08-28T18:43:07.757771Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.335055ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:43:07.757830Z","caller":"traceutil/trace.go:171","msg":"trace[234485653] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1288; }","duration":"109.404791ms","start":"2024-08-28T18:43:07.648415Z","end":"2024-08-28T18:43:07.757820Z","steps":["trace[234485653] 'range keys from in-memory index tree'  (duration: 109.286244ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:43:11 up 21 min,  0 users,  load average: 0.13, 0.20, 0.17
	Linux embed-certs-014980 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75c7c0076722fa61714fa39f3164f13b568bcc53d70189f01180f8d897be448d] <==
	I0828 18:39:34.010953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:39:34.011011       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:41:33.009723       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:41:33.009842       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:41:34.011304       1 handler_proxy.go:99] no RequestInfo found in the context
	W0828 18:41:34.011389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:41:34.011455       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0828 18:41:34.011568       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:41:34.012746       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:41:34.012764       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:42:34.013399       1 handler_proxy.go:99] no RequestInfo found in the context
	W0828 18:42:34.013676       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:42:34.013730       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0828 18:42:34.013737       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:42:34.014946       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:42:34.014980       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
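	
	The repeated 503s for v1beta1.metrics.k8s.io are consistent with the metrics-server pod shown further down being stuck in ImagePullBackOff, which leaves the aggregated metrics API unavailable. Assuming the kubectl context matches the profile name, the APIService status can be confirmed with:
	
	    kubectl --context embed-certs-014980 get apiservice v1beta1.metrics.k8s.io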
	
	
	==> kube-apiserver [fdfe0e43ef655ecb760c941414f1de49e1cd57e156eaf03cfbc503bc80719eba] <==
	W0828 18:26:22.674173       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.674173       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.758654       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.848517       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.888182       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.936293       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.952450       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:22.999249       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.125746       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.172785       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.260146       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.307953       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.535462       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:23.547037       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:26.321287       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:26.958128       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.136137       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.172266       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.389355       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.433403       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.469907       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.539848       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.651643       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.660029       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0828 18:26:27.798179       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4b2026b2021e7a1ef7de3865619a82c63953a5be5296464d79ccfb3ab1ac6a17] <==
	E0828 18:38:10.037671       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:38:10.580467       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:38:40.043417       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:38:40.587738       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:39:10.053028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:10.598742       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:39:40.060660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:40.608128       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:10.068880       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:10.617453       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:40.076259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:40.626084       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:10.084750       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:10.634421       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:40.090855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:40.643343       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:42:03.489135       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-014980"
	E0828 18:42:10.098009       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:42:10.653353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:42:40.104599       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:42:40.660927       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:42:53.610651       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="893.938µs"
	I0828 18:43:04.611592       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="231.33µs"
	E0828 18:43:10.116865       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:43:10.668246       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a8726422898139a9becd3740f00b52f0615c668e67a93e3afcd345f69ab56174] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:26:41.683707       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:26:41.710825       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.130"]
	E0828 18:26:41.710971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:26:41.953292       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:26:41.953376       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:26:41.953492       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:26:41.960469       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:26:41.960829       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:26:41.960858       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:26:41.964688       1 config.go:197] "Starting service config controller"
	I0828 18:26:41.964719       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:26:41.964743       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:26:41.964746       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:26:41.968813       1 config.go:326] "Starting node config controller"
	I0828 18:26:41.968835       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:26:42.065416       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:26:42.065369       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:26:42.075609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2b4e7e2bb458e745c253692660330618c1fee23d63e5a177ac89d5371d6b6a87] <==
	W0828 18:26:33.111291       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 18:26:33.116320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:33.938848       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 18:26:33.938901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.074040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0828 18:26:34.074094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.126229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 18:26:34.126293       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.131991       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0828 18:26:34.132053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.171507       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0828 18:26:34.171622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.244746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.244802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.256820       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 18:26:34.256982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.266789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0828 18:26:34.266841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.277466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.277614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.314412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0828 18:26:34.314465       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 18:26:34.546614       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 18:26:34.546662       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0828 18:26:37.289248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:42:11 embed-certs-014980 kubelet[2879]: E0828 18:42:11.592887    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:42:15 embed-certs-014980 kubelet[2879]: E0828 18:42:15.863727    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870535863371511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:15 embed-certs-014980 kubelet[2879]: E0828 18:42:15.863770    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870535863371511,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:25 embed-certs-014980 kubelet[2879]: E0828 18:42:25.865890    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870545865271873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:25 embed-certs-014980 kubelet[2879]: E0828 18:42:25.865992    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870545865271873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:26 embed-certs-014980 kubelet[2879]: E0828 18:42:26.591424    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]: E0828 18:42:35.613886    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]: E0828 18:42:35.867227    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870555866902754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:35 embed-certs-014980 kubelet[2879]: E0828 18:42:35.867261    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870555866902754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:40 embed-certs-014980 kubelet[2879]: E0828 18:42:40.604122    2879 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 28 18:42:40 embed-certs-014980 kubelet[2879]: E0828 18:42:40.604495    2879 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 28 18:42:40 embed-certs-014980 kubelet[2879]: E0828 18:42:40.605345    2879 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kqxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-7nkmb_kube-system(bd303839-96c1-4e38-b7cb-2e66ba627a69): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 28 18:42:40 embed-certs-014980 kubelet[2879]: E0828 18:42:40.606791    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:42:45 embed-certs-014980 kubelet[2879]: E0828 18:42:45.869139    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870565868683305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:45 embed-certs-014980 kubelet[2879]: E0828 18:42:45.869163    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870565868683305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:53 embed-certs-014980 kubelet[2879]: E0828 18:42:53.590905    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:42:55 embed-certs-014980 kubelet[2879]: E0828 18:42:55.870330    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870575870110647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:55 embed-certs-014980 kubelet[2879]: E0828 18:42:55.870354    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870575870110647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:04 embed-certs-014980 kubelet[2879]: E0828 18:43:04.592617    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-7nkmb" podUID="bd303839-96c1-4e38-b7cb-2e66ba627a69"
	Aug 28 18:43:05 embed-certs-014980 kubelet[2879]: E0828 18:43:05.873880    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870585873585518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:43:05 embed-certs-014980 kubelet[2879]: E0828 18:43:05.873928    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870585873585518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [03453e02aa9963c458c74885933fadfdd4a6e0e674b401d0361eb3fdddaa3f7a] <==
	I0828 18:26:43.192006       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:26:43.201448       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:26:43.201597       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:26:43.209505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:26:43.209952       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"249220a2-967f-454b-a646-05777cbb0811", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a became leader
	I0828 18:26:43.210006       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a!
	I0828 18:26:43.310621       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-014980_9f3ed12a-e0f5-421a-a7cf-9808813b563a!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-014980 -n embed-certs-014980
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-014980 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-7nkmb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb: exit status 1 (61.725596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-7nkmb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-014980 describe pod metrics-server-6867b74b74-7nkmb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (436.23s)
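Note on the kubelet errors above: they are consistent with how the addon was enabled in this run, where metrics-server was pointed at a non-resolvable registry (--registries=MetricsServer=fake.domain, as recorded in the Audit table further down in this report), so every pull of fake.domain/registry.k8s.io/echoserver:1.4 ends in ErrImagePull / ImagePullBackOff. A minimal manual check of that state, sketched with the profile and pod name from this run (both will differ in other runs), would be something like:

    kubectl --context embed-certs-014980 -n kube-system get pod metrics-server-6867b74b74-7nkmb \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'
    kubectl --context embed-certs-014980 -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-6867b74b74-7nkmb

The first command prints ImagePullBackOff (or ErrImagePull) while the pod is stuck; the second lists the pull-failure events behind it. By the time the post-mortem describe ran here, the pod had already been deleted, hence the NotFound error above.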

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (362.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-072854 -n no-preload-072854
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-28 18:42:23.544433609 +0000 UTC m=+6662.594995589
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-072854 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-072854 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.968µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-072854 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
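The assertion at start_stop_delete_test.go:297 has no deployment info to match because the describe at :291 ran against an already-exhausted context (it returned "context deadline exceeded" after 1.968µs). Outside the harness, the same image check could be reproduced by hand; a sketch using the profile name from this run, assuming the dashboard addon actually created the deployment:

    kubectl --context no-preload-072854 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}'

Since the addon was enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below), the output should contain registry.k8s.io/echoserver:1.4; given the 9m0s pod wait above never saw a kubernetes-dashboard pod, the command would most likely return NotFound instead.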
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-072854 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-072854 logs -n 25: (1.145505681s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC | 28 Aug 24 18:41 UTC |
	| start   | -p newest-cni-835349 --memory=2200 --alsologtostderr   | newest-cni-835349            | jenkins | v1.33.1 | 28 Aug 24 18:41 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:41:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:41:45.386311   83534 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:41:45.386570   83534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:41:45.386579   83534 out.go:358] Setting ErrFile to fd 2...
	I0828 18:41:45.386584   83534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:41:45.386768   83534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:41:45.387326   83534 out.go:352] Setting JSON to false
	I0828 18:41:45.388245   83534 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8651,"bootTime":1724861854,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:41:45.388306   83534 start.go:139] virtualization: kvm guest
	I0828 18:41:45.390781   83534 out.go:177] * [newest-cni-835349] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:41:45.392256   83534 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:41:45.392277   83534 notify.go:220] Checking for updates...
	I0828 18:41:45.395252   83534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:41:45.396596   83534 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:41:45.398013   83534 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:41:45.399290   83534 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:41:45.400498   83534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:41:45.402018   83534 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:41:45.402154   83534 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:41:45.402253   83534 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:41:45.402331   83534 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:41:45.440104   83534 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 18:41:45.441303   83534 start.go:297] selected driver: kvm2
	I0828 18:41:45.441321   83534 start.go:901] validating driver "kvm2" against <nil>
	I0828 18:41:45.441335   83534 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:41:45.442068   83534 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:41:45.442179   83534 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:41:45.457574   83534 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:41:45.457625   83534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0828 18:41:45.457651   83534 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0828 18:41:45.457943   83534 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0828 18:41:45.457982   83534 cni.go:84] Creating CNI manager for ""
	I0828 18:41:45.457993   83534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:41:45.458003   83534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 18:41:45.458057   83534 start.go:340] cluster config:
	{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:41:45.458201   83534 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:41:45.459729   83534 out.go:177] * Starting "newest-cni-835349" primary control-plane node in "newest-cni-835349" cluster
	I0828 18:41:45.460801   83534 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:41:45.460836   83534 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:41:45.460847   83534 cache.go:56] Caching tarball of preloaded images
	I0828 18:41:45.460933   83534 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:41:45.460947   83534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0828 18:41:45.461059   83534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:41:45.461080   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json: {Name:mk4d992218b2d0a545c5be40f64c3eef8b474d43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:41:45.461217   83534 start.go:360] acquireMachinesLock for newest-cni-835349: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:41:45.461256   83534 start.go:364] duration metric: took 21.277µs to acquireMachinesLock for "newest-cni-835349"
	I0828 18:41:45.461280   83534 start.go:93] Provisioning new machine with config: &{Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:41:45.461348   83534 start.go:125] createHost starting for "" (driver="kvm2")
	I0828 18:41:45.462804   83534 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0828 18:41:45.462969   83534 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:41:45.463019   83534 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:41:45.478287   83534 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43733
	I0828 18:41:45.478806   83534 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:41:45.479489   83534 main.go:141] libmachine: Using API Version  1
	I0828 18:41:45.479517   83534 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:41:45.480003   83534 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:41:45.480255   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:41:45.480471   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:41:45.480660   83534 start.go:159] libmachine.API.Create for "newest-cni-835349" (driver="kvm2")
	I0828 18:41:45.480692   83534 client.go:168] LocalClient.Create starting
	I0828 18:41:45.480730   83534 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem
	I0828 18:41:45.480777   83534 main.go:141] libmachine: Decoding PEM data...
	I0828 18:41:45.480799   83534 main.go:141] libmachine: Parsing certificate...
	I0828 18:41:45.480877   83534 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem
	I0828 18:41:45.480908   83534 main.go:141] libmachine: Decoding PEM data...
	I0828 18:41:45.480929   83534 main.go:141] libmachine: Parsing certificate...
	I0828 18:41:45.480952   83534 main.go:141] libmachine: Running pre-create checks...
	I0828 18:41:45.480973   83534 main.go:141] libmachine: (newest-cni-835349) Calling .PreCreateCheck
	I0828 18:41:45.481407   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetConfigRaw
	I0828 18:41:45.481895   83534 main.go:141] libmachine: Creating machine...
	I0828 18:41:45.481915   83534 main.go:141] libmachine: (newest-cni-835349) Calling .Create
	I0828 18:41:45.482091   83534 main.go:141] libmachine: (newest-cni-835349) Creating KVM machine...
	I0828 18:41:45.483672   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found existing default KVM network
	I0828 18:41:45.485063   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:45.484927   83558 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:aa:a2:74} reservation:<nil>}
	I0828 18:41:45.486171   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:45.486049   83558 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000288970}
	I0828 18:41:45.486192   83534 main.go:141] libmachine: (newest-cni-835349) DBG | created network xml: 
	I0828 18:41:45.486202   83534 main.go:141] libmachine: (newest-cni-835349) DBG | <network>
	I0828 18:41:45.486215   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   <name>mk-newest-cni-835349</name>
	I0828 18:41:45.486224   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   <dns enable='no'/>
	I0828 18:41:45.486231   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   
	I0828 18:41:45.486246   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0828 18:41:45.486258   83534 main.go:141] libmachine: (newest-cni-835349) DBG |     <dhcp>
	I0828 18:41:45.486271   83534 main.go:141] libmachine: (newest-cni-835349) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0828 18:41:45.486283   83534 main.go:141] libmachine: (newest-cni-835349) DBG |     </dhcp>
	I0828 18:41:45.486304   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   </ip>
	I0828 18:41:45.486327   83534 main.go:141] libmachine: (newest-cni-835349) DBG |   
	I0828 18:41:45.486337   83534 main.go:141] libmachine: (newest-cni-835349) DBG | </network>
	I0828 18:41:45.486352   83534 main.go:141] libmachine: (newest-cni-835349) DBG | 
	I0828 18:41:45.491439   83534 main.go:141] libmachine: (newest-cni-835349) DBG | trying to create private KVM network mk-newest-cni-835349 192.168.50.0/24...
	I0828 18:41:45.565242   83534 main.go:141] libmachine: (newest-cni-835349) Setting up store path in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349 ...
	I0828 18:41:45.565272   83534 main.go:141] libmachine: (newest-cni-835349) DBG | private KVM network mk-newest-cni-835349 192.168.50.0/24 created
	I0828 18:41:45.565289   83534 main.go:141] libmachine: (newest-cni-835349) Building disk image from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 18:41:45.565311   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:45.565159   83558 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:41:45.565333   83534 main.go:141] libmachine: (newest-cni-835349) Downloading /home/jenkins/minikube-integration/19529-10317/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso...
	I0828 18:41:45.817311   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:45.817180   83558 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa...
	I0828 18:41:46.108447   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:46.108324   83558 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/newest-cni-835349.rawdisk...
	I0828 18:41:46.108479   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Writing magic tar header
	I0828 18:41:46.108497   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Writing SSH key tar header
	I0828 18:41:46.108509   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:46.108467   83558 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349 ...
	I0828 18:41:46.108851   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349
	I0828 18:41:46.108879   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349 (perms=drwx------)
	I0828 18:41:46.108890   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube/machines
	I0828 18:41:46.108905   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:41:46.108918   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19529-10317
	I0828 18:41:46.108928   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0828 18:41:46.108939   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home/jenkins
	I0828 18:41:46.108950   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube/machines (perms=drwxr-xr-x)
	I0828 18:41:46.108962   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Checking permissions on dir: /home
	I0828 18:41:46.108973   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317/.minikube (perms=drwxr-xr-x)
	I0828 18:41:46.108991   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins/minikube-integration/19529-10317 (perms=drwxrwxr-x)
	I0828 18:41:46.109002   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0828 18:41:46.109012   83534 main.go:141] libmachine: (newest-cni-835349) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0828 18:41:46.109022   83534 main.go:141] libmachine: (newest-cni-835349) Creating domain...
	I0828 18:41:46.109055   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Skipping /home - not owner
	I0828 18:41:46.110388   83534 main.go:141] libmachine: (newest-cni-835349) define libvirt domain using xml: 
	I0828 18:41:46.110409   83534 main.go:141] libmachine: (newest-cni-835349) <domain type='kvm'>
	I0828 18:41:46.110421   83534 main.go:141] libmachine: (newest-cni-835349)   <name>newest-cni-835349</name>
	I0828 18:41:46.110429   83534 main.go:141] libmachine: (newest-cni-835349)   <memory unit='MiB'>2200</memory>
	I0828 18:41:46.110438   83534 main.go:141] libmachine: (newest-cni-835349)   <vcpu>2</vcpu>
	I0828 18:41:46.110445   83534 main.go:141] libmachine: (newest-cni-835349)   <features>
	I0828 18:41:46.110463   83534 main.go:141] libmachine: (newest-cni-835349)     <acpi/>
	I0828 18:41:46.110475   83534 main.go:141] libmachine: (newest-cni-835349)     <apic/>
	I0828 18:41:46.110485   83534 main.go:141] libmachine: (newest-cni-835349)     <pae/>
	I0828 18:41:46.110495   83534 main.go:141] libmachine: (newest-cni-835349)     
	I0828 18:41:46.110517   83534 main.go:141] libmachine: (newest-cni-835349)   </features>
	I0828 18:41:46.110527   83534 main.go:141] libmachine: (newest-cni-835349)   <cpu mode='host-passthrough'>
	I0828 18:41:46.110535   83534 main.go:141] libmachine: (newest-cni-835349)   
	I0828 18:41:46.110546   83534 main.go:141] libmachine: (newest-cni-835349)   </cpu>
	I0828 18:41:46.110556   83534 main.go:141] libmachine: (newest-cni-835349)   <os>
	I0828 18:41:46.110564   83534 main.go:141] libmachine: (newest-cni-835349)     <type>hvm</type>
	I0828 18:41:46.110574   83534 main.go:141] libmachine: (newest-cni-835349)     <boot dev='cdrom'/>
	I0828 18:41:46.110582   83534 main.go:141] libmachine: (newest-cni-835349)     <boot dev='hd'/>
	I0828 18:41:46.110592   83534 main.go:141] libmachine: (newest-cni-835349)     <bootmenu enable='no'/>
	I0828 18:41:46.110600   83534 main.go:141] libmachine: (newest-cni-835349)   </os>
	I0828 18:41:46.110607   83534 main.go:141] libmachine: (newest-cni-835349)   <devices>
	I0828 18:41:46.110618   83534 main.go:141] libmachine: (newest-cni-835349)     <disk type='file' device='cdrom'>
	I0828 18:41:46.110656   83534 main.go:141] libmachine: (newest-cni-835349)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/boot2docker.iso'/>
	I0828 18:41:46.110674   83534 main.go:141] libmachine: (newest-cni-835349)       <target dev='hdc' bus='scsi'/>
	I0828 18:41:46.110683   83534 main.go:141] libmachine: (newest-cni-835349)       <readonly/>
	I0828 18:41:46.110693   83534 main.go:141] libmachine: (newest-cni-835349)     </disk>
	I0828 18:41:46.110702   83534 main.go:141] libmachine: (newest-cni-835349)     <disk type='file' device='disk'>
	I0828 18:41:46.110712   83534 main.go:141] libmachine: (newest-cni-835349)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0828 18:41:46.110728   83534 main.go:141] libmachine: (newest-cni-835349)       <source file='/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/newest-cni-835349.rawdisk'/>
	I0828 18:41:46.110740   83534 main.go:141] libmachine: (newest-cni-835349)       <target dev='hda' bus='virtio'/>
	I0828 18:41:46.110752   83534 main.go:141] libmachine: (newest-cni-835349)     </disk>
	I0828 18:41:46.110765   83534 main.go:141] libmachine: (newest-cni-835349)     <interface type='network'>
	I0828 18:41:46.110778   83534 main.go:141] libmachine: (newest-cni-835349)       <source network='mk-newest-cni-835349'/>
	I0828 18:41:46.110783   83534 main.go:141] libmachine: (newest-cni-835349)       <model type='virtio'/>
	I0828 18:41:46.110793   83534 main.go:141] libmachine: (newest-cni-835349)     </interface>
	I0828 18:41:46.110801   83534 main.go:141] libmachine: (newest-cni-835349)     <interface type='network'>
	I0828 18:41:46.110814   83534 main.go:141] libmachine: (newest-cni-835349)       <source network='default'/>
	I0828 18:41:46.110825   83534 main.go:141] libmachine: (newest-cni-835349)       <model type='virtio'/>
	I0828 18:41:46.110834   83534 main.go:141] libmachine: (newest-cni-835349)     </interface>
	I0828 18:41:46.110849   83534 main.go:141] libmachine: (newest-cni-835349)     <serial type='pty'>
	I0828 18:41:46.110861   83534 main.go:141] libmachine: (newest-cni-835349)       <target port='0'/>
	I0828 18:41:46.110877   83534 main.go:141] libmachine: (newest-cni-835349)     </serial>
	I0828 18:41:46.110887   83534 main.go:141] libmachine: (newest-cni-835349)     <console type='pty'>
	I0828 18:41:46.110895   83534 main.go:141] libmachine: (newest-cni-835349)       <target type='serial' port='0'/>
	I0828 18:41:46.110904   83534 main.go:141] libmachine: (newest-cni-835349)     </console>
	I0828 18:41:46.110918   83534 main.go:141] libmachine: (newest-cni-835349)     <rng model='virtio'>
	I0828 18:41:46.110935   83534 main.go:141] libmachine: (newest-cni-835349)       <backend model='random'>/dev/random</backend>
	I0828 18:41:46.110949   83534 main.go:141] libmachine: (newest-cni-835349)     </rng>
	I0828 18:41:46.110966   83534 main.go:141] libmachine: (newest-cni-835349)     
	I0828 18:41:46.110976   83534 main.go:141] libmachine: (newest-cni-835349)     
	I0828 18:41:46.110984   83534 main.go:141] libmachine: (newest-cni-835349)   </devices>
	I0828 18:41:46.110993   83534 main.go:141] libmachine: (newest-cni-835349) </domain>
	I0828 18:41:46.111003   83534 main.go:141] libmachine: (newest-cni-835349) 
	I0828 18:41:46.115092   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:4c:57:07 in network default
	I0828 18:41:46.115706   83534 main.go:141] libmachine: (newest-cni-835349) Ensuring networks are active...
	I0828 18:41:46.115725   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:46.116428   83534 main.go:141] libmachine: (newest-cni-835349) Ensuring network default is active
	I0828 18:41:46.116789   83534 main.go:141] libmachine: (newest-cni-835349) Ensuring network mk-newest-cni-835349 is active
	I0828 18:41:46.117351   83534 main.go:141] libmachine: (newest-cni-835349) Getting domain xml...
	I0828 18:41:46.118181   83534 main.go:141] libmachine: (newest-cni-835349) Creating domain...
	I0828 18:41:47.383150   83534 main.go:141] libmachine: (newest-cni-835349) Waiting to get IP...
	I0828 18:41:47.384045   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:47.384463   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:47.384487   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:47.384442   83558 retry.go:31] will retry after 258.98691ms: waiting for machine to come up
	I0828 18:41:47.645036   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:47.645741   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:47.645773   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:47.645675   83558 retry.go:31] will retry after 355.065716ms: waiting for machine to come up
	I0828 18:41:48.001775   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:48.002263   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:48.002284   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:48.002225   83558 retry.go:31] will retry after 328.667428ms: waiting for machine to come up
	I0828 18:41:48.332721   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:48.333142   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:48.333170   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:48.333087   83558 retry.go:31] will retry after 510.719449ms: waiting for machine to come up
	I0828 18:41:48.845956   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:48.846467   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:48.846495   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:48.846417   83558 retry.go:31] will retry after 736.36034ms: waiting for machine to come up
	I0828 18:41:49.584347   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:49.584804   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:49.584824   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:49.584778   83558 retry.go:31] will retry after 589.350244ms: waiting for machine to come up
	I0828 18:41:50.175631   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:50.176031   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:50.176057   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:50.175985   83558 retry.go:31] will retry after 960.954905ms: waiting for machine to come up
	I0828 18:41:51.138919   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:51.139477   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:51.139505   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:51.139452   83558 retry.go:31] will retry after 1.347582231s: waiting for machine to come up
	I0828 18:41:52.488434   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:52.488893   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:52.488921   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:52.488850   83558 retry.go:31] will retry after 1.449576528s: waiting for machine to come up
	I0828 18:41:53.940756   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:53.941215   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:53.941240   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:53.941178   83558 retry.go:31] will retry after 1.958167671s: waiting for machine to come up
	I0828 18:41:55.901280   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:55.901769   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:55.901787   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:55.901736   83558 retry.go:31] will retry after 2.56119449s: waiting for machine to come up
	I0828 18:41:58.466519   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:41:58.467025   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:41:58.467053   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:41:58.466967   83558 retry.go:31] will retry after 3.221726378s: waiting for machine to come up
	I0828 18:42:01.690849   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:01.691328   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find current IP address of domain newest-cni-835349 in network mk-newest-cni-835349
	I0828 18:42:01.691351   83534 main.go:141] libmachine: (newest-cni-835349) DBG | I0828 18:42:01.691291   83558 retry.go:31] will retry after 4.28252231s: waiting for machine to come up
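The lines above are libmachine polling the libvirt DHCP leases with a growing, jittered backoff until the new domain picks up an address (note the intervals are not strictly monotonic: 736ms is followed by 589ms). A minimal, generic Go sketch of that retry-with-backoff shape, purely illustrative and not minikube's actual retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or the deadline passes,
// sleeping a growing, jittered interval between attempts -- the same
// shape as the "will retry after ..." lines in the log above.
func retryWithBackoff(fn func() error, deadline time.Duration) error {
	start := time.Now()
	wait := 500 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %v: %w", deadline, err)
		}
		// add jitter so successive sleeps are not perfectly regular
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the base interval
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("machine is up after", attempts, "attempts")
}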
	I0828 18:42:05.975156   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:05.975665   83534 main.go:141] libmachine: (newest-cni-835349) Found IP for machine: 192.168.50.179
	I0828 18:42:05.975693   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has current primary IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:05.975723   83534 main.go:141] libmachine: (newest-cni-835349) Reserving static IP address...
	I0828 18:42:05.976019   83534 main.go:141] libmachine: (newest-cni-835349) DBG | unable to find host DHCP lease matching {name: "newest-cni-835349", mac: "52:54:00:53:3a:ba", ip: "192.168.50.179"} in network mk-newest-cni-835349
	I0828 18:42:06.053277   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Getting to WaitForSSH function...
	I0828 18:42:06.053301   83534 main.go:141] libmachine: (newest-cni-835349) Reserved static IP address: 192.168.50.179
	I0828 18:42:06.053313   83534 main.go:141] libmachine: (newest-cni-835349) Waiting for SSH to be available...
	I0828 18:42:06.056065   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.056577   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.056609   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.056722   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH client type: external
	I0828 18:42:06.056745   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa (-rw-------)
	I0828 18:42:06.056785   83534 main.go:141] libmachine: (newest-cni-835349) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.179 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:42:06.056808   83534 main.go:141] libmachine: (newest-cni-835349) DBG | About to run SSH command:
	I0828 18:42:06.056821   83534 main.go:141] libmachine: (newest-cni-835349) DBG | exit 0
	I0828 18:42:06.181944   83534 main.go:141] libmachine: (newest-cni-835349) DBG | SSH cmd err, output: <nil>: 
	I0828 18:42:06.182249   83534 main.go:141] libmachine: (newest-cni-835349) KVM machine creation complete!
	I0828 18:42:06.182677   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetConfigRaw
	I0828 18:42:06.183203   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:06.183380   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:06.183592   83534 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0828 18:42:06.183608   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetState
	I0828 18:42:06.184956   83534 main.go:141] libmachine: Detecting operating system of created instance...
	I0828 18:42:06.184972   83534 main.go:141] libmachine: Waiting for SSH to be available...
	I0828 18:42:06.184979   83534 main.go:141] libmachine: Getting to WaitForSSH function...
	I0828 18:42:06.184987   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.187380   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.187747   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.187775   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.187862   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.188042   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.188198   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.188319   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.188464   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:06.188666   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:06.188677   83534 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0828 18:42:06.289300   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:42:06.289329   83534 main.go:141] libmachine: Detecting the provisioner...
	I0828 18:42:06.289341   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.292387   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.292905   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.292938   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.293078   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.293257   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.293482   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.293648   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.293816   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:06.293986   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:06.293997   83534 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0828 18:42:06.398175   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0828 18:42:06.398272   83534 main.go:141] libmachine: found compatible host: buildroot
	I0828 18:42:06.398289   83534 main.go:141] libmachine: Provisioning with buildroot...
	I0828 18:42:06.398301   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:06.398547   83534 buildroot.go:166] provisioning hostname "newest-cni-835349"
	I0828 18:42:06.398568   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:06.398754   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.401353   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.401703   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.401733   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.401871   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.402039   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.402205   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.402353   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.402478   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:06.402642   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:06.402658   83534 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-835349 && echo "newest-cni-835349" | sudo tee /etc/hostname
	I0828 18:42:06.516085   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-835349
	
	I0828 18:42:06.516114   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.519132   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.519475   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.519506   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.519691   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.519914   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.520096   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.520220   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.520419   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:06.520628   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:06.520653   83534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-835349' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-835349/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-835349' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:42:06.631175   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
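On a fresh guest where /etc/hosts has no entry for the new hostname yet, the script above ends with a loopback alias of the form (illustrative):

127.0.1.1 newest-cni-835349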
	I0828 18:42:06.631219   83534 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:42:06.631251   83534 buildroot.go:174] setting up certificates
	I0828 18:42:06.631264   83534 provision.go:84] configureAuth start
	I0828 18:42:06.631283   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetMachineName
	I0828 18:42:06.631574   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:06.634391   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.634708   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.634733   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.634917   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.636985   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.637402   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.637427   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.637551   83534 provision.go:143] copyHostCerts
	I0828 18:42:06.637608   83534 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:42:06.637632   83534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:42:06.637731   83534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:42:06.637859   83534 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:42:06.637869   83534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:42:06.637906   83534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:42:06.637997   83534 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:42:06.638006   83534 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:42:06.638040   83534 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:42:06.638128   83534 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.newest-cni-835349 san=[127.0.0.1 192.168.50.179 localhost minikube newest-cni-835349]
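The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.50.179, localhost, minikube, newest-cni-835349). If you need to confirm what actually landed in the file, a standard openssl inspection on the Jenkins host would look something like:

openssl x509 -in /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'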
	I0828 18:42:06.704390   83534 provision.go:177] copyRemoteCerts
	I0828 18:42:06.704447   83534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:42:06.704472   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.707662   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.708088   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.708114   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.708327   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.708501   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.708667   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.708809   83534 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:06.792075   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:42:06.816783   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:42:06.839832   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:42:06.862632   83534 provision.go:87] duration metric: took 231.351194ms to configureAuth
	I0828 18:42:06.862660   83534 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:42:06.862834   83534 config.go:182] Loaded profile config "newest-cni-835349": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:42:06.862918   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:06.865339   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.865706   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:06.865736   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:06.866013   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:06.866227   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.866398   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:06.866592   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:06.866791   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:06.867028   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:06.867054   83534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:42:07.099350   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:42:07.099400   83534 main.go:141] libmachine: Checking connection to Docker...
	I0828 18:42:07.099412   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetURL
	I0828 18:42:07.100783   83534 main.go:141] libmachine: (newest-cni-835349) DBG | Using libvirt version 6000000
	I0828 18:42:07.103501   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.103838   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.103866   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.104011   83534 main.go:141] libmachine: Docker is up and running!
	I0828 18:42:07.104027   83534 main.go:141] libmachine: Reticulating splines...
	I0828 18:42:07.104035   83534 client.go:171] duration metric: took 21.623332253s to LocalClient.Create
	I0828 18:42:07.104084   83534 start.go:167] duration metric: took 21.623405106s to libmachine.API.Create "newest-cni-835349"
	I0828 18:42:07.104098   83534 start.go:293] postStartSetup for "newest-cni-835349" (driver="kvm2")
	I0828 18:42:07.104110   83534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:42:07.104133   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:07.104422   83534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:42:07.104453   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:07.107001   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.107357   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.107397   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.107507   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:07.107708   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:07.107876   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:07.108026   83534 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:07.192684   83534 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:42:07.196654   83534 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:42:07.196679   83534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:42:07.196744   83534 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:42:07.196829   83534 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:42:07.196949   83534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:42:07.206637   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:42:07.230445   83534 start.go:296] duration metric: took 126.333033ms for postStartSetup
	I0828 18:42:07.230502   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetConfigRaw
	I0828 18:42:07.231206   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:07.233998   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.234283   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.234308   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.234493   83534 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/config.json ...
	I0828 18:42:07.234686   83534 start.go:128] duration metric: took 21.773327851s to createHost
	I0828 18:42:07.234707   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:07.236676   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.236939   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.236963   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.237115   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:07.237302   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:07.237467   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:07.237601   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:07.237761   83534 main.go:141] libmachine: Using SSH client type: native
	I0828 18:42:07.237937   83534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.179 22 <nil> <nil>}
	I0828 18:42:07.237948   83534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:42:07.339016   83534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724870527.312758964
	
	I0828 18:42:07.339038   83534 fix.go:216] guest clock: 1724870527.312758964
	I0828 18:42:07.339047   83534 fix.go:229] Guest: 2024-08-28 18:42:07.312758964 +0000 UTC Remote: 2024-08-28 18:42:07.234697519 +0000 UTC m=+21.888541785 (delta=78.061445ms)
	I0828 18:42:07.339075   83534 fix.go:200] guest clock delta is within tolerance: 78.061445ms
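For the record, the reported delta is just the guest timestamp minus the host-side timestamp captured moments earlier: 1724870527.312758964 - 1724870527.234697519 = 0.078061445 s, i.e. the 78.061445ms above, comfortably inside the skew tolerance.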
	I0828 18:42:07.339082   83534 start.go:83] releasing machines lock for "newest-cni-835349", held for 21.877815332s
	I0828 18:42:07.339120   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:07.339388   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:07.341826   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.342224   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.342246   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.342435   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:07.343079   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:07.343297   83534 main.go:141] libmachine: (newest-cni-835349) Calling .DriverName
	I0828 18:42:07.343415   83534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:42:07.343459   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:07.343543   83534 ssh_runner.go:195] Run: cat /version.json
	I0828 18:42:07.343559   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHHostname
	I0828 18:42:07.346026   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.346221   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.346427   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.346455   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.346561   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:07.346580   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:07.346584   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:07.346752   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:07.346754   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHPort
	I0828 18:42:07.346918   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:07.346927   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHKeyPath
	I0828 18:42:07.347060   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetSSHUsername
	I0828 18:42:07.347072   83534 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:07.347226   83534 sshutil.go:53] new ssh client: &{IP:192.168.50.179 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/newest-cni-835349/id_rsa Username:docker}
	I0828 18:42:07.458746   83534 ssh_runner.go:195] Run: systemctl --version
	I0828 18:42:07.464532   83534 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:42:07.623898   83534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:42:07.629926   83534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:42:07.629996   83534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:42:07.648544   83534 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:42:07.648575   83534 start.go:495] detecting cgroup driver to use...
	I0828 18:42:07.648646   83534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:42:07.667023   83534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:42:07.681374   83534 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:42:07.681433   83534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:42:07.694907   83534 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:42:07.708485   83534 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:42:07.839226   83534 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:42:07.995855   83534 docker.go:233] disabling docker service ...
	I0828 18:42:07.995932   83534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:42:08.009756   83534 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:42:08.023770   83534 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:42:08.179346   83534 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:42:08.311421   83534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:42:08.325577   83534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:42:08.345583   83534 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:42:08.345638   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.356033   83534 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:42:08.356112   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.366542   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.377184   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.387322   83534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:42:08.397632   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.408614   83534 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:42:08.425756   83534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
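Taken together, these sed edits leave the relevant keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the commands above, not a capture of the actual file):

pause_image = "registry.k8s.io/pause:3.10"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]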
	I0828 18:42:08.435846   83534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:42:08.445072   83534 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:42:08.445132   83534 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:42:08.457350   83534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:42:08.467634   83534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:42:08.599236   83534 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:42:08.688530   83534 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:42:08.688610   83534 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:42:08.693391   83534 start.go:563] Will wait 60s for crictl version
	I0828 18:42:08.693460   83534 ssh_runner.go:195] Run: which crictl
	I0828 18:42:08.697592   83534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:42:08.740776   83534 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:42:08.740881   83534 ssh_runner.go:195] Run: crio --version
	I0828 18:42:08.771422   83534 ssh_runner.go:195] Run: crio --version
	I0828 18:42:08.802809   83534 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:42:08.803930   83534 main.go:141] libmachine: (newest-cni-835349) Calling .GetIP
	I0828 18:42:08.806879   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:08.807302   83534 main.go:141] libmachine: (newest-cni-835349) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:3a:ba", ip: ""} in network mk-newest-cni-835349: {Iface:virbr4 ExpiryTime:2024-08-28 19:41:59 +0000 UTC Type:0 Mac:52:54:00:53:3a:ba Iaid: IPaddr:192.168.50.179 Prefix:24 Hostname:newest-cni-835349 Clientid:01:52:54:00:53:3a:ba}
	I0828 18:42:08.807332   83534 main.go:141] libmachine: (newest-cni-835349) DBG | domain newest-cni-835349 has defined IP address 192.168.50.179 and MAC address 52:54:00:53:3a:ba in network mk-newest-cni-835349
	I0828 18:42:08.807541   83534 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:42:08.811729   83534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:42:08.826091   83534 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0828 18:42:08.827604   83534 kubeadm.go:883] updating cluster {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:42:08.827734   83534 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:42:08.827804   83534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:42:08.859206   83534 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:42:08.859279   83534 ssh_runner.go:195] Run: which lz4
	I0828 18:42:08.863412   83534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:42:08.867447   83534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:42:08.867494   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:42:10.080807   83534 crio.go:462] duration metric: took 1.217443988s to copy over tarball
	I0828 18:42:10.080870   83534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:42:12.155927   83534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.075027487s)
	I0828 18:42:12.155959   83534 crio.go:469] duration metric: took 2.075125951s to extract the tarball
	I0828 18:42:12.155968   83534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:42:12.192814   83534 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:42:12.240162   83534 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:42:12.240187   83534 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:42:12.240195   83534 kubeadm.go:934] updating node { 192.168.50.179 8443 v1.31.0 crio true true} ...
	I0828 18:42:12.240300   83534 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-835349 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.179
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
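One detail worth calling out in the kubelet drop-in above: the empty ExecStart= line is intentional. In a systemd drop-in, an empty ExecStart= clears whatever ExecStart the base unit defined, so the full command line that follows replaces it rather than being appended as a second ExecStart.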
	I0828 18:42:12.240393   83534 ssh_runner.go:195] Run: crio config
	I0828 18:42:12.289733   83534 cni.go:84] Creating CNI manager for ""
	I0828 18:42:12.289760   83534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:42:12.289774   83534 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0828 18:42:12.289802   83534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.50.179 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-835349 NodeName:newest-cni-835349 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.179"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:ma
p[] NodeIP:192.168.50.179 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:42:12.289984   83534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.179
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-835349"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.179
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.179"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:42:12.290051   83534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:42:12.300575   83534 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:42:12.300645   83534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:42:12.309835   83534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0828 18:42:12.325871   83534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:42:12.342912   83534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2285 bytes)
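The file staged here as kubeadm.yaml.new is the rendered config shown a few lines up. The actual bootstrap happens later in the run, outside this excerpt, where minikube typically feeds the staged config to kubeadm along the lines of:

sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=...

The exact flag set varies and is not shown in this section, so treat the line above as indicative rather than a quote from this log.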
	I0828 18:42:12.359448   83534 ssh_runner.go:195] Run: grep 192.168.50.179	control-plane.minikube.internal$ /etc/hosts
	I0828 18:42:12.362993   83534 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.179	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:42:12.375416   83534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:42:12.510678   83534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:42:12.531954   83534 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349 for IP: 192.168.50.179
	I0828 18:42:12.531976   83534 certs.go:194] generating shared ca certs ...
	I0828 18:42:12.531991   83534 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.532147   83534 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:42:12.532221   83534 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:42:12.532235   83534 certs.go:256] generating profile certs ...
	I0828 18:42:12.532310   83534 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.key
	I0828 18:42:12.532337   83534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.crt with IP's: []
	I0828 18:42:12.749922   83534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.crt ...
	I0828 18:42:12.749959   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.crt: {Name:mkd92f9b3798582a901044f3ca87138a1349b95d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.750139   83534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.key ...
	I0828 18:42:12.750151   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/client.key: {Name:mk0a238bc1be4e29280417dda6ab2045654aac85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.750227   83534 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c
	I0828 18:42:12.750242   83534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt.0d40501c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.179]
	I0828 18:42:12.791523   83534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt.0d40501c ...
	I0828 18:42:12.791546   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt.0d40501c: {Name:mk30a8b8f83609bac8d3434d405702834541ca63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.791697   83534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c ...
	I0828 18:42:12.791709   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c: {Name:mkea0159e330d2e7d0c929e199f2b991e7e284e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.791777   83534 certs.go:381] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt.0d40501c -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt
	I0828 18:42:12.791865   83534 certs.go:385] copying /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key.0d40501c -> /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key
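A side note on the SAN list a few lines up: 10.96.0.1 is the first usable address of the 10.96.0.0/12 service CIDR configured for this cluster, i.e. the in-cluster kubernetes.default service VIP, which is why it has to appear in the apiserver certificate alongside the node IP 192.168.50.179 and the loopback addresses.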
	I0828 18:42:12.791920   83534 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key
	I0828 18:42:12.791936   83534 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt with IP's: []
	I0828 18:42:12.986041   83534 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt ...
	I0828 18:42:12.986069   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt: {Name:mkc77608b151fd5efdcfb9eaf870d57fe1ae90a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.986241   83534 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key ...
	I0828 18:42:12.986254   83534 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key: {Name:mk8d972a2382afc2b9c58e2ed26e8c35b3c40e66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:42:12.986419   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:42:12.986455   83534 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:42:12.986465   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:42:12.986487   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:42:12.986510   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:42:12.986531   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:42:12.986566   83534 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
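	Note: the profile apiserver cert generated earlier in this block is signed with the cluster service IP (10.96.0.1), loopback, and the node IP 192.168.50.179 as SANs. A quick way to confirm which SANs ended up in the cert (paths taken from the log above; any equivalent openssl invocation works) is:

	    $ openssl x509 -noout -text \
	        -in /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt \
	        | grep -A1 "Subject Alternative Name"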
	I0828 18:42:12.987165   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:42:13.012572   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:42:13.039303   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:42:13.061404   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:42:13.084797   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:42:13.108675   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:42:13.131459   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:42:13.156530   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/newest-cni-835349/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:42:13.181905   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:42:13.205499   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:42:13.228235   83534 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:42:13.250337   83534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
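	Note: the scp steps above copy the CA, profile, and proxy-client material to /var/lib/minikube/certs and render the kubeconfig in memory ("scp memory") before writing it to /var/lib/minikube/kubeconfig. Both can be checked on the node after start-up (paths are the ones shown in the log; the profile name is taken from this run):

	    $ minikube -p newest-cni-835349 ssh -- sudo ls -l /var/lib/minikube/certs/
	    $ minikube -p newest-cni-835349 ssh -- sudo cat /var/lib/minikube/kubeconfig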
	I0828 18:42:13.265948   83534 ssh_runner.go:195] Run: openssl version
	I0828 18:42:13.271310   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:42:13.280856   83534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:42:13.285095   83534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:42:13.285147   83534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:42:13.290491   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:42:13.303933   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:42:13.314824   83534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:42:13.322125   83534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:42:13.322197   83534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:42:13.331635   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:42:13.350647   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:42:13.365950   83534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:42:13.370008   83534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:42:13.370068   83534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:42:13.375421   83534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
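	Note: the link names created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes with a ".0" suffix, which is how OpenSSL's default verify path looks up CA certificates by hash. The link the log creates for minikubeCA.pem can be reproduced by hand with the same two commands the log runs (a sketch, not minikube's exact code):

	    $ h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"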
	I0828 18:42:13.385293   83534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:42:13.389039   83534 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 18:42:13.389087   83534 kubeadm.go:392] StartCluster: {Name:newest-cni-835349 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:newest-cni-835349 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.179 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersio
n:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:42:13.389166   83534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:42:13.389223   83534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:42:13.427490   83534 cri.go:89] found id: ""
	I0828 18:42:13.427577   83534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:42:13.437080   83534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:42:13.446538   83534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:42:13.455571   83534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:42:13.455593   83534 kubeadm.go:157] found existing configuration files:
	
	I0828 18:42:13.455639   83534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:42:13.465479   83534 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:42:13.465529   83534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:42:13.474691   83534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:42:13.483342   83534 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:42:13.483409   83534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:42:13.493506   83534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:42:13.507738   83534 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:42:13.507792   83534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:42:13.516540   83534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:42:13.525012   83534 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:42:13.525071   83534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
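	Note: the four grep/rm pairs above are minikube's stale-config check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init runs, so kubeadm regenerates it. Condensed into a loop, the check is roughly (a sketch, not the exact minikube implementation):

	    $ for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	          sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	              || sudo rm -f "/etc/kubernetes/$f"
	      done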
	I0828 18:42:13.534168   83534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:42:13.639124   83534 kubeadm.go:310] W0828 18:42:13.620504     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:42:13.641499   83534 kubeadm.go:310] W0828 18:42:13.623066     824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:42:13.750352   83534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:42:22.569118   83534 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:42:22.569198   83534 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:42:22.569319   83534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:42:22.569461   83534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:42:22.569591   83534 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:42:22.569692   83534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:42:22.571113   83534 out.go:235]   - Generating certificates and keys ...
	I0828 18:42:22.571198   83534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:42:22.571256   83534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:42:22.571321   83534 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 18:42:22.571369   83534 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 18:42:22.571455   83534 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 18:42:22.571533   83534 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 18:42:22.571601   83534 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 18:42:22.571759   83534 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-835349] and IPs [192.168.50.179 127.0.0.1 ::1]
	I0828 18:42:22.571816   83534 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 18:42:22.571957   83534 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-835349] and IPs [192.168.50.179 127.0.0.1 ::1]
	I0828 18:42:22.572052   83534 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 18:42:22.572152   83534 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 18:42:22.572205   83534 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 18:42:22.572253   83534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:42:22.572299   83534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:42:22.572369   83534 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:42:22.572443   83534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:42:22.572537   83534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:42:22.572608   83534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:42:22.572713   83534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:42:22.572814   83534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:42:22.574229   83534 out.go:235]   - Booting up control plane ...
	I0828 18:42:22.574444   83534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:42:22.574608   83534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:42:22.574711   83534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:42:22.574837   83534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:42:22.574959   83534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:42:22.575016   83534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:42:22.575133   83534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:42:22.575226   83534 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:42:22.575275   83534 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.434511ms
	I0828 18:42:22.575339   83534 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:42:22.575389   83534 kubeadm.go:310] [api-check] The API server is healthy after 5.001994887s
	I0828 18:42:22.575485   83534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:42:22.575592   83534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:42:22.575644   83534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:42:22.575827   83534 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-835349 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:42:22.575906   83534 kubeadm.go:310] [bootstrap-token] Using token: 74ziap.q3jmeaov6wk11rld
	I0828 18:42:22.577312   83534 out.go:235]   - Configuring RBAC rules ...
	I0828 18:42:22.577463   83534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:42:22.577578   83534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:42:22.577766   83534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:42:22.577943   83534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:42:22.578122   83534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:42:22.578225   83534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:42:22.578405   83534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:42:22.578450   83534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:42:22.578490   83534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:42:22.578496   83534 kubeadm.go:310] 
	I0828 18:42:22.578571   83534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:42:22.578584   83534 kubeadm.go:310] 
	I0828 18:42:22.578687   83534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:42:22.578696   83534 kubeadm.go:310] 
	I0828 18:42:22.578730   83534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:42:22.578817   83534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:42:22.578889   83534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:42:22.578897   83534 kubeadm.go:310] 
	I0828 18:42:22.578968   83534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:42:22.578977   83534 kubeadm.go:310] 
	I0828 18:42:22.579041   83534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:42:22.579066   83534 kubeadm.go:310] 
	I0828 18:42:22.579151   83534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:42:22.579252   83534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:42:22.579388   83534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:42:22.579407   83534 kubeadm.go:310] 
	I0828 18:42:22.579483   83534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:42:22.579546   83534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:42:22.579552   83534 kubeadm.go:310] 
	I0828 18:42:22.579648   83534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 74ziap.q3jmeaov6wk11rld \
	I0828 18:42:22.579777   83534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:42:22.579824   83534 kubeadm.go:310] 	--control-plane 
	I0828 18:42:22.579831   83534 kubeadm.go:310] 
	I0828 18:42:22.579898   83534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:42:22.579906   83534 kubeadm.go:310] 
	I0828 18:42:22.579975   83534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 74ziap.q3jmeaov6wk11rld \
	I0828 18:42:22.580084   83534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
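	Note: the --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's public key. It can be recomputed on the control-plane node to sanity-check a join command, using the standard kubeadm recipe (the CA path below is kubeadm's default location and is an assumption here, since the log does not show it):

	    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'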
	I0828 18:42:22.580100   83534 cni.go:84] Creating CNI manager for ""
	I0828 18:42:22.580107   83534 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:42:22.581247   83534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
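	Note: with the kvm2 driver, the crio runtime, and no explicit --cni, minikube falls back to its built-in bridge CNI, written as a conflist under /etc/cni/net.d on the node. The resulting file can be inspected from the host; the exact file name below is an assumption (minikube typically writes a single conflist there):

	    $ minikube -p newest-cni-835349 ssh -- sudo ls /etc/cni/net.d/
	    $ minikube -p newest-cni-835349 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist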
	
	
	==> CRI-O <==
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.136905418Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2e85ecf-948a-4838-b71e-e6cf915d4e1a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.137093253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2e85ecf-948a-4838-b71e-e6cf915d4e1a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.177776945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd3cf886-62fd-401e-83ce-3c0f9e0d9ddd name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.177880931Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd3cf886-62fd-401e-83ce-3c0f9e0d9ddd name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.178891763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=994fc65d-c225-49c1-84d8-e719c8c5e23a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.179296107Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870544179232808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=994fc65d-c225-49c1-84d8-e719c8c5e23a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.179906520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ac256f6-147c-4d51-8536-880aa8f0a740 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.179968051Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ac256f6-147c-4d51-8536-880aa8f0a740 name=/runtime.v1.RuntimeService/ListContainers
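	Note: the repeated Version / ImageFsInfo / ListContainers requests in this CRI-O debug log are the kubelet's periodic CRI polling of crio. The same RPCs can be issued by hand on the node with crictl for a human-readable view (standard crictl subcommands; the socket path is CRI-O's default and is an assumption here):

	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	    $ sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a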
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.180193147Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ac256f6-147c-4d51-8536-880aa8f0a740 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.216424316Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=e74a1ee7-a91a-43a4-ac01-2ffa2c6ba6fa name=/runtime.v1.RuntimeService/Status
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.216640090Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e74a1ee7-a91a-43a4-ac01-2ffa2c6ba6fa name=/runtime.v1.RuntimeService/Status
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.218133559Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b18be0e-1532-402b-b1a6-98c38aa16ad6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.218202801Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b18be0e-1532-402b-b1a6-98c38aa16ad6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.227566097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=83a82172-89c8-40a9-b31f-4e11c1d55da7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.228522290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870544228495139,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=83a82172-89c8-40a9-b31f-4e11c1d55da7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.232627403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81a11753-e2e4-4e36-85a9-6758bbe9154a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.232688765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81a11753-e2e4-4e36-85a9-6758bbe9154a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.232891037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81a11753-e2e4-4e36-85a9-6758bbe9154a name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.272773747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bf8b9ca1-d175-45ab-8b78-32e7e54d297d name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.272850952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bf8b9ca1-d175-45ab-8b78-32e7e54d297d name=/runtime.v1.RuntimeService/Version
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.274033696Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20a47593-7034-48ac-863b-a6065d92edbc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.274464716Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870544274399299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20a47593-7034-48ac-863b-a6065d92edbc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.274938094Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a78880d6-f0e0-4d5e-82f0-4eb36d585bb7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.274989642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a78880d6-f0e0-4d5e-82f0-4eb36d585bb7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:42:24 no-preload-072854 crio[707]: time="2024-08-28 18:42:24.275200394Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1724869405055448900,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409df,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f3ce8453601b338979dcf74433a3f120cdc495b8c66e8ac2011c8489140ae8d,PodSandboxId:339674dac8537cb6f0fb38b8849472ee23c4432833ad2dd715fc4a995242ab2c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1724869385181705073,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e90c8374-18c5-4c02-8189-c6ebe492f3a8,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9,PodSandboxId:20809ee4cdfe8119c310fa072101a30c43a6cbd35c62dacbf602e4cda04d2fbe,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1724869381934831120,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-fjclq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3279bcbb-5b7f-464a-a6d0-4206b877065b,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7,PodSandboxId:50e4e0e35116c7f5c5fc03ac768d9580078e850585eeda2fbcdd750ddded5e0a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1724869374327116719,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tfxfd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a136ed96-1b09-43d2-94
71-fdc7f17f5760,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a,PodSandboxId:a7204ccbcb800ba388f505ad663cfa6962b5bef8ba06a79840e48ca42fc1413f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1724869374227567945,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fdf9f52-ebdf-4ab6-8f34-1e773a4409d
f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4,PodSandboxId:fd675e7ed02eecb297684ba8ddd95119391283098c5be18e8684dfd0a61e073e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1724869369559519831,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 541cc869670d255c9
f4fd662604b4660,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb,PodSandboxId:5f45bc15601615876a0dd9ed129b6ab178fdda9c02d6e7ead3ae37e6fe2d73cb,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1724869369549106295,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2838f91e3abe4d0cf13006dc6c2702,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64,PodSandboxId:f3aa6cca52c6c6724e068cb9820a28a433b018c9705a3039d9f5fb3c69cc2ed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1724869369520508481,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980955ced74df61d81166c77aaac11ef,},Annotations:map[string]string{io.kuber
netes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83,PodSandboxId:7f52d06fd489d5c139b89a795c7c1ea626d6653b5e2b3dd3fac50bc14a2d5b4f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1724869369454773456,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-072854,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 214b6cc2ce1e526ab841b14896d802f3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a78880d6-f0e0-4d5e-82f0-4eb36d585bb7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	176a416d0685e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   a7204ccbcb800       storage-provisioner
	4f3ce8453601b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   339674dac8537       busybox
	b670cbb724f62       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   20809ee4cdfe8       coredns-6f6b679f8f-fjclq
	f1e183b4b26b5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   50e4e0e35116c       kube-proxy-tfxfd
	851b142e4bcda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   a7204ccbcb800       storage-provisioner
	4be517729ec13       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   fd675e7ed02ee       kube-controller-manager-no-preload-072854
	701d65f0dbe97       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   5f45bc1560161       etcd-no-preload-072854
	5eb6f94089b12       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   f3aa6cca52c6c       kube-scheduler-no-preload-072854
	2cb3211855569       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   7f52d06fd489d       kube-apiserver-no-preload-072854
	
	
	==> coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47460 - 61815 "HINFO IN 6038158238618917869.2219171541028845927. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012350093s
	
	
	==> describe nodes <==
	Name:               no-preload-072854
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-072854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=no-preload-072854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T18_13_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 18:13:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-072854
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 18:42:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 18:38:42 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 18:38:42 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 18:38:42 +0000   Wed, 28 Aug 2024 18:13:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 18:38:42 +0000   Wed, 28 Aug 2024 18:23:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.138
	  Hostname:    no-preload-072854
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 546e5317073343a7b3a22fbeb711cba0
	  System UUID:                546e5317-0733-43a7-b3a2-2fbeb711cba0
	  Boot ID:                    0132aa51-9333-4ab3-9af1-517df4f8d990
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-fjclq                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-072854                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-072854             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-072854    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-tfxfd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-072854             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-d5x89              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-072854 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-072854 event: Registered Node no-preload-072854 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-072854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-072854 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-072854 event: Registered Node no-preload-072854 in Controller
	
	
	==> dmesg <==
	[Aug28 18:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053066] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045138] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.112880] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.935648] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.541538] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.456956] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.068431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057051] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.213693] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[  +0.124885] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.284371] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[ +15.012638] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.070429] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.713717] systemd-fstab-generator[1411]: Ignoring "noauto" option for root device
	[  +5.308619] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.298592] systemd-fstab-generator[2035]: Ignoring "noauto" option for root device
	[  +3.722510] kauditd_printk_skb: 61 callbacks suppressed
	[Aug28 18:23] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] <==
	{"level":"info","ts":"2024-08-28T18:22:51.610515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:51.610654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-28T18:22:51.611206Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:51.611257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-28T18:22:51.611885Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:51.611923Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-28T18:22:51.612864Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-28T18:22:51.613117Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.138:2379"}
	{"level":"info","ts":"2024-08-28T18:32:51.643021Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":856}
	{"level":"info","ts":"2024-08-28T18:32:51.654338Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":856,"took":"10.499808ms","hash":259965689,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":2662400,"current-db-size-in-use":"2.7 MB"}
	{"level":"info","ts":"2024-08-28T18:32:51.654508Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":259965689,"revision":856,"compact-revision":-1}
	{"level":"info","ts":"2024-08-28T18:37:51.652422Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2024-08-28T18:37:51.656804Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1098,"took":"3.886476ms","hash":3858664215,"current-db-size-bytes":2662400,"current-db-size":"2.7 MB","current-db-size-in-use-bytes":1597440,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-28T18:37:51.656863Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3858664215,"revision":1098,"compact-revision":856}
	{"level":"info","ts":"2024-08-28T18:42:14.836505Z","caller":"traceutil/trace.go:171","msg":"trace[709005007] linearizableReadLoop","detail":"{readStateIndex:1824; appliedIndex:1823; }","duration":"164.427637ms","start":"2024-08-28T18:42:14.672034Z","end":"2024-08-28T18:42:14.836462Z","steps":["trace[709005007] 'read index received'  (duration: 164.227597ms)","trace[709005007] 'applied index is now lower than readState.Index'  (duration: 199.301µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:42:14.836582Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:42:14.302251Z","time spent":"534.317738ms","remote":"127.0.0.1:45910","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-28T18:42:14.837051Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.782363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T18:42:14.837145Z","caller":"traceutil/trace.go:171","msg":"trace[461096212] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1553; }","duration":"145.952637ms","start":"2024-08-28T18:42:14.691175Z","end":"2024-08-28T18:42:14.837128Z","steps":["trace[461096212] 'agreement among raft nodes before linearized reading'  (duration: 145.752465ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:42:14.837351Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"165.272996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-08-28T18:42:14.838010Z","caller":"traceutil/trace.go:171","msg":"trace[1040977835] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1553; }","duration":"165.967031ms","start":"2024-08-28T18:42:14.672030Z","end":"2024-08-28T18:42:14.837997Z","steps":["trace[1040977835] 'agreement among raft nodes before linearized reading'  (duration: 164.665902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:42:15.319440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.494486ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3069644712611671676 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.138\" mod_revision:1546 > success:<request_put:<key:\"/registry/masterleases/192.168.61.138\" value_size:67 lease:3069644712611671672 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.138\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-28T18:42:15.319770Z","caller":"traceutil/trace.go:171","msg":"trace[121760240] transaction","detail":"{read_only:false; response_revision:1554; number_of_response:1; }","duration":"481.711925ms","start":"2024-08-28T18:42:14.838031Z","end":"2024-08-28T18:42:15.319743Z","steps":["trace[121760240] 'process raft request'  (duration: 95.878986ms)","trace[121760240] 'compare'  (duration: 384.396298ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-28T18:42:15.319884Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:42:14.838012Z","time spent":"481.820909ms","remote":"127.0.0.1:45910","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.138\" mod_revision:1546 > success:<request_put:<key:\"/registry/masterleases/192.168.61.138\" value_size:67 lease:3069644712611671672 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.138\" > >"}
	{"level":"info","ts":"2024-08-28T18:42:15.320480Z","caller":"traceutil/trace.go:171","msg":"trace[61639019] transaction","detail":"{read_only:false; response_revision:1555; number_of_response:1; }","duration":"478.943444ms","start":"2024-08-28T18:42:14.841524Z","end":"2024-08-28T18:42:15.320467Z","steps":["trace[61639019] 'process raft request'  (duration: 478.125652ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T18:42:15.320696Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-28T18:42:14.841506Z","time spent":"479.148138ms","remote":"127.0.0.1:46046","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1553 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 18:42:24 up 20 min,  0 users,  load average: 0.24, 0.14, 0.08
	Linux no-preload-072854 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:37:53.899298       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:37:53.899579       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0828 18:37:53.900813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:37:53.900895       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:38:53.901681       1 handler_proxy.go:99] no RequestInfo found in the context
	W0828 18:38:53.901681       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:38:53.902018       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0828 18:38:53.902069       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0828 18:38:53.903182       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:38:53.903334       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0828 18:40:53.903700       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:40:53.903888       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0828 18:40:53.903699       1 handler_proxy.go:99] no RequestInfo found in the context
	E0828 18:40:53.903939       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0828 18:40:53.905172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0828 18:40:53.905208       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] <==
	E0828 18:36:56.630146       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:36:57.072120       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:37:26.636242       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:37:27.080410       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:37:56.642404       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:37:57.088015       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:38:26.648486       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:38:27.095286       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:38:42.271516       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-072854"
	E0828 18:38:56.655314       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:38:57.102394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0828 18:39:00.898934       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.117474ms"
	I0828 18:39:12.898407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.074µs"
	E0828 18:39:26.662325       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:27.112444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:39:56.669351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:39:57.120704       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:26.676215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:27.128719       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:40:56.682271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:40:57.136992       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:26.688119       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:27.144837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0828 18:41:56.695572       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0828 18:41:57.153988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0828 18:22:54.528705       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0828 18:22:54.537296       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.138"]
	E0828 18:22:54.537408       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 18:22:54.601928       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0828 18:22:54.602001       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0828 18:22:54.602029       1 server_linux.go:169] "Using iptables Proxier"
	I0828 18:22:54.609234       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 18:22:54.609509       1 server.go:483] "Version info" version="v1.31.0"
	I0828 18:22:54.609569       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:54.624733       1 config.go:104] "Starting endpoint slice config controller"
	I0828 18:22:54.624835       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 18:22:54.624859       1 config.go:197] "Starting service config controller"
	I0828 18:22:54.624915       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 18:22:54.632082       1 config.go:326] "Starting node config controller"
	I0828 18:22:54.632177       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 18:22:54.725503       1 shared_informer.go:320] Caches are synced for service config
	I0828 18:22:54.725542       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0828 18:22:54.732313       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] <==
	I0828 18:22:50.706938       1 serving.go:386] Generated self-signed cert in-memory
	W0828 18:22:52.849647       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0828 18:22:52.849724       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0828 18:22:52.849735       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0828 18:22:52.849740       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0828 18:22:52.922307       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0828 18:22:52.926384       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 18:22:52.935280       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0828 18:22:52.935685       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0828 18:22:52.936684       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0828 18:22:52.935707       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0828 18:22:53.037465       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 18:41:09 no-preload-072854 kubelet[1418]: E0828 18:41:09.112240    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870469111281157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:19 no-preload-072854 kubelet[1418]: E0828 18:41:19.113349    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870479113129823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:19 no-preload-072854 kubelet[1418]: E0828 18:41:19.113385    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870479113129823,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:19 no-preload-072854 kubelet[1418]: E0828 18:41:19.880198    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:41:29 no-preload-072854 kubelet[1418]: E0828 18:41:29.117892    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870489115133051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:29 no-preload-072854 kubelet[1418]: E0828 18:41:29.117940    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870489115133051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:32 no-preload-072854 kubelet[1418]: E0828 18:41:32.880108    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:41:39 no-preload-072854 kubelet[1418]: E0828 18:41:39.118802    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870499118535073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:39 no-preload-072854 kubelet[1418]: E0828 18:41:39.118837    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870499118535073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:45 no-preload-072854 kubelet[1418]: E0828 18:41:45.881870    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:41:48 no-preload-072854 kubelet[1418]: E0828 18:41:48.899092    1418 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 28 18:41:48 no-preload-072854 kubelet[1418]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 28 18:41:48 no-preload-072854 kubelet[1418]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 28 18:41:48 no-preload-072854 kubelet[1418]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 28 18:41:48 no-preload-072854 kubelet[1418]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 28 18:41:49 no-preload-072854 kubelet[1418]: E0828 18:41:49.120947    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870509120196456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:49 no-preload-072854 kubelet[1418]: E0828 18:41:49.120986    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870509120196456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:59 no-preload-072854 kubelet[1418]: E0828 18:41:59.123376    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870519123019515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:41:59 no-preload-072854 kubelet[1418]: E0828 18:41:59.123743    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870519123019515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:00 no-preload-072854 kubelet[1418]: E0828 18:42:00.887994    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:42:09 no-preload-072854 kubelet[1418]: E0828 18:42:09.125513    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870529125034973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:09 no-preload-072854 kubelet[1418]: E0828 18:42:09.127051    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870529125034973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:15 no-preload-072854 kubelet[1418]: E0828 18:42:15.880301    1418 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-d5x89" podUID="2f77d1e5-7779-46f9-881d-ff1a6a25098e"
	Aug 28 18:42:19 no-preload-072854 kubelet[1418]: E0828 18:42:19.130986    1418 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870539130255124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 28 18:42:19 no-preload-072854 kubelet[1418]: E0828 18:42:19.131321    1418 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870539130255124,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] <==
	I0828 18:23:25.152722       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 18:23:25.163272       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 18:23:25.163498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 18:23:25.171784       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 18:23:25.171969       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4!
	I0828 18:23:25.175093       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efda45f4-ed40-4df1-90a2-f5b7fe26e6b6", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4 became leader
	I0828 18:23:25.274778       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-072854_fed95a00-6980-40fc-9ba1-308f96903ec4!
	
	
	==> storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] <==
	I0828 18:22:54.401916       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0828 18:23:24.404886       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-072854 -n no-preload-072854
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-072854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-d5x89
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89: exit status 1 (67.352267ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-d5x89" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-072854 describe pod metrics-server-6867b74b74-d5x89: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (362.62s)
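The kubelet log above shows the only non-running pod, metrics-server-6867b74b74-d5x89, stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, and by the time the post-mortem describe ran the pod had already been deleted. A hedged way to reproduce the check by hand, assuming the addon's usual metrics-server deployment name and k8s-app=metrics-server label in kube-system, is:

	# Hypothetical follow-up, not part of the test: show the image the deployment
	# references and whether any replacement pods exist.
	kubectl --context no-preload-072854 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	kubectl --context no-preload-072854 -n kube-system get pods -l k8s-app=metrics-server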

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (134.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.50.99:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.50.99:8443: connect: connection refused
[... the identical connection-refused warning above repeats at every poll for the remainder of the 9m0s wait; the only other output interleaved during the wait is the following ...]
E0828 18:40:44.865280   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:41:03.311837   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:41:03.673765   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (223.056881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-131737" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-131737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-131737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.985µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-131737 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
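Because the apiserver was already stopped, the describe call above returned nothing, so the image assertion had no deployment info to compare against registry.k8s.io/echoserver:1.4. Once the apiserver is reachable again, the same assertion can be approximated by hand, assuming the dashboard-metrics-scraper deployment named in the trace above:

	# Hypothetical manual re-run of the image check; prints only the container
	# image the test expects to contain registry.k8s.io/echoserver:1.4.
	kubectl --context old-k8s-version-131737 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'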
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (226.292715ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-131737 logs -n 25: (1.566366733s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo cat                              | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo                                  | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo find                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-647068 sudo crio                             | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-647068                                       | bridge-647068                | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:13 UTC |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:13 UTC | 28 Aug 24 18:14 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-072854             | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-014980            | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-640552  | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC | 28 Aug 24 18:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:14 UTC |                     |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-072854                  | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-072854                                   | no-preload-072854            | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC | 28 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-131737        | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:16 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-014980                 | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-640552       | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-014980                                  | embed-certs-014980           | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-640552 | jenkins | v1.33.1 | 28 Aug 24 18:17 UTC | 28 Aug 24 18:26 UTC |
	|         | default-k8s-diff-port-640552                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-131737             | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC | 28 Aug 24 18:18 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-131737                              | old-k8s-version-131737       | jenkins | v1.33.1 | 28 Aug 24 18:18 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
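	
	For reference, the "Last Start" log that follows corresponds to the final entry in the table above. A single-command reproduction sketch, assuming the minikube binary and profile name used in this run, with all flags copied verbatim from that table entry:
	
	    minikube start -p old-k8s-version-131737 --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	      --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0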
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 18:18:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 18:18:45.197319   77396 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:18:45.197606   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197616   77396 out.go:358] Setting ErrFile to fd 2...
	I0828 18:18:45.197621   77396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:18:45.197793   77396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:18:45.198351   77396 out.go:352] Setting JSON to false
	I0828 18:18:45.199218   77396 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7271,"bootTime":1724861854,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:18:45.199316   77396 start.go:139] virtualization: kvm guest
	I0828 18:18:45.201168   77396 out.go:177] * [old-k8s-version-131737] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:18:45.202252   77396 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:18:45.202312   77396 notify.go:220] Checking for updates...
	I0828 18:18:45.204563   77396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:18:45.205713   77396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:18:45.206652   77396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:18:45.207806   77396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:18:45.208891   77396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:18:45.210308   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:18:45.210717   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.210780   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.225409   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40731
	I0828 18:18:45.225806   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.226318   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.226338   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.226722   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.226903   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.228685   77396 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0828 18:18:45.229863   77396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:18:45.230199   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:18:45.230243   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:18:45.245150   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37169
	I0828 18:18:45.245641   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:18:45.246164   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:18:45.246199   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:18:45.246486   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:18:45.246677   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:18:45.282499   77396 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 18:18:45.283789   77396 start.go:297] selected driver: kvm2
	I0828 18:18:45.283804   77396 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.283918   77396 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:18:45.284594   77396 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.284693   77396 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 18:18:45.299887   77396 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 18:18:45.300236   77396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:18:45.300266   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:18:45.300274   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:18:45.300308   77396 start.go:340] cluster config:
	{Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:18:45.300419   77396 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 18:18:45.302883   77396 out.go:177] * Starting "old-k8s-version-131737" primary control-plane node in "old-k8s-version-131737" cluster
	I0828 18:18:41.610368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:44.682293   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:45.304152   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:18:45.304189   77396 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 18:18:45.304208   77396 cache.go:56] Caching tarball of preloaded images
	I0828 18:18:45.304295   77396 preload.go:172] Found /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0828 18:18:45.304305   77396 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0828 18:18:45.304426   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:18:45.304608   77396 start.go:360] acquireMachinesLock for old-k8s-version-131737: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:18:50.762367   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:53.834404   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:18:59.914331   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:02.986351   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:09.066375   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:12.138382   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:18.218324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:21.290324   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:27.370327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:30.442342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:36.522377   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:39.594396   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:45.674327   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:48.746316   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:54.826346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:19:57.898388   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:03.978342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:07.050322   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:13.130368   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:16.202305   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:22.282326   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:25.354374   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:31.434381   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:34.506312   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:40.586353   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:43.658361   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:49.738343   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:52.810329   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:20:58.890346   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:01.962342   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:08.042323   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:11.114385   75908 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.138:22: connect: no route to host
	I0828 18:21:14.118406   76435 start.go:364] duration metric: took 4m0.584080771s to acquireMachinesLock for "embed-certs-014980"
	I0828 18:21:14.118470   76435 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:14.118492   76435 fix.go:54] fixHost starting: 
	I0828 18:21:14.118808   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:14.118834   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:14.134434   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0828 18:21:14.134863   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:14.135369   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:21:14.135398   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:14.135717   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:14.135891   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:14.136052   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:21:14.137681   76435 fix.go:112] recreateIfNeeded on embed-certs-014980: state=Stopped err=<nil>
	I0828 18:21:14.137705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	W0828 18:21:14.137861   76435 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:14.139602   76435 out.go:177] * Restarting existing kvm2 VM for "embed-certs-014980" ...
	I0828 18:21:14.116153   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:14.116188   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116549   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:21:14.116581   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:21:14.116758   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:21:14.118261   75908 machine.go:96] duration metric: took 4m37.42460751s to provisionDockerMachine
	I0828 18:21:14.118302   75908 fix.go:56] duration metric: took 4m37.4457415s for fixHost
	I0828 18:21:14.118309   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 4m37.445770955s
	W0828 18:21:14.118326   75908 start.go:714] error starting host: provision: host is not running
	W0828 18:21:14.118418   75908 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0828 18:21:14.118430   75908 start.go:729] Will try again in 5 seconds ...
	I0828 18:21:14.140812   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Start
	I0828 18:21:14.140967   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring networks are active...
	I0828 18:21:14.141716   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network default is active
	I0828 18:21:14.142021   76435 main.go:141] libmachine: (embed-certs-014980) Ensuring network mk-embed-certs-014980 is active
	I0828 18:21:14.142397   76435 main.go:141] libmachine: (embed-certs-014980) Getting domain xml...
	I0828 18:21:14.143109   76435 main.go:141] libmachine: (embed-certs-014980) Creating domain...
	I0828 18:21:15.352062   76435 main.go:141] libmachine: (embed-certs-014980) Waiting to get IP...
	I0828 18:21:15.352991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.353345   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.353418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.353319   77926 retry.go:31] will retry after 289.130703ms: waiting for machine to come up
	I0828 18:21:15.644017   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.644460   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.644482   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.644434   77926 retry.go:31] will retry after 240.747341ms: waiting for machine to come up
	I0828 18:21:15.886897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:15.887308   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:15.887340   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:15.887258   77926 retry.go:31] will retry after 467.167731ms: waiting for machine to come up
	I0828 18:21:16.355790   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.356204   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.356232   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.356160   77926 retry.go:31] will retry after 506.51967ms: waiting for machine to come up
	I0828 18:21:16.863907   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:16.864309   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:16.864343   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:16.864264   77926 retry.go:31] will retry after 458.679357ms: waiting for machine to come up
	I0828 18:21:17.324948   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.325436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.325462   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.325385   77926 retry.go:31] will retry after 604.433375ms: waiting for machine to come up
	I0828 18:21:17.931169   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:17.931568   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:17.931614   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:17.931507   77926 retry.go:31] will retry after 852.10168ms: waiting for machine to come up
	I0828 18:21:19.120844   75908 start.go:360] acquireMachinesLock for no-preload-072854: {Name:mk7ce4bbf81e21758c0e63f7f98e0a1defc75de0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0828 18:21:18.785312   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:18.785735   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:18.785762   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:18.785682   77926 retry.go:31] will retry after 1.332568679s: waiting for machine to come up
	I0828 18:21:20.119550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:20.119990   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:20.120016   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:20.119947   77926 retry.go:31] will retry after 1.606559109s: waiting for machine to come up
	I0828 18:21:21.727719   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:21.728147   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:21.728175   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:21.728091   77926 retry.go:31] will retry after 1.901370923s: waiting for machine to come up
	I0828 18:21:23.632187   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:23.632554   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:23.632578   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:23.632509   77926 retry.go:31] will retry after 2.387413646s: waiting for machine to come up
	I0828 18:21:26.022576   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:26.022902   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:26.022924   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:26.022862   77926 retry.go:31] will retry after 3.196331032s: waiting for machine to come up
	I0828 18:21:33.374810   76486 start.go:364] duration metric: took 4m17.539072759s to acquireMachinesLock for "default-k8s-diff-port-640552"
	I0828 18:21:33.374877   76486 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:33.374898   76486 fix.go:54] fixHost starting: 
	I0828 18:21:33.375317   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:33.375357   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:33.392734   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0828 18:21:33.393239   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:33.393761   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:21:33.393783   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:33.394131   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:33.394347   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:33.394547   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:21:33.395998   76486 fix.go:112] recreateIfNeeded on default-k8s-diff-port-640552: state=Stopped err=<nil>
	I0828 18:21:33.396038   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	W0828 18:21:33.396210   76486 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:33.398362   76486 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-640552" ...
	I0828 18:21:29.220396   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:29.220861   76435 main.go:141] libmachine: (embed-certs-014980) DBG | unable to find current IP address of domain embed-certs-014980 in network mk-embed-certs-014980
	I0828 18:21:29.220897   76435 main.go:141] libmachine: (embed-certs-014980) DBG | I0828 18:21:29.220820   77926 retry.go:31] will retry after 2.802196616s: waiting for machine to come up
	I0828 18:21:32.026808   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027298   76435 main.go:141] libmachine: (embed-certs-014980) Found IP for machine: 192.168.72.130
	I0828 18:21:32.027319   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has current primary IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.027325   76435 main.go:141] libmachine: (embed-certs-014980) Reserving static IP address...
	I0828 18:21:32.027698   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.027764   76435 main.go:141] libmachine: (embed-certs-014980) DBG | skip adding static IP to network mk-embed-certs-014980 - found existing host DHCP lease matching {name: "embed-certs-014980", mac: "52:54:00:4c:61:8f", ip: "192.168.72.130"}
	I0828 18:21:32.027781   76435 main.go:141] libmachine: (embed-certs-014980) Reserved static IP address: 192.168.72.130
	I0828 18:21:32.027800   76435 main.go:141] libmachine: (embed-certs-014980) Waiting for SSH to be available...
	I0828 18:21:32.027814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Getting to WaitForSSH function...
	I0828 18:21:32.029740   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030020   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.030051   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.030171   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH client type: external
	I0828 18:21:32.030200   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa (-rw-------)
	I0828 18:21:32.030235   76435 main.go:141] libmachine: (embed-certs-014980) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.130 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:32.030249   76435 main.go:141] libmachine: (embed-certs-014980) DBG | About to run SSH command:
	I0828 18:21:32.030264   76435 main.go:141] libmachine: (embed-certs-014980) DBG | exit 0
	I0828 18:21:32.153760   76435 main.go:141] libmachine: (embed-certs-014980) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:32.154184   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetConfigRaw
	I0828 18:21:32.154807   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.157116   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157449   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.157472   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.157662   76435 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/config.json ...
	I0828 18:21:32.157857   76435 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:32.157873   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:32.158051   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.160224   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160516   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.160550   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.160705   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.160877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.160999   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.161141   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.161310   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.161509   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.161528   76435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:32.270041   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:32.270070   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270351   76435 buildroot.go:166] provisioning hostname "embed-certs-014980"
	I0828 18:21:32.270375   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.270568   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.273124   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273480   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.273509   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.273626   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.273774   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.273941   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.274062   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.274264   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.274435   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.274448   76435 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-014980 && echo "embed-certs-014980" | sudo tee /etc/hostname
	I0828 18:21:32.401452   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-014980
	
	I0828 18:21:32.401473   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.404278   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404622   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.404672   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.404785   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.405012   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405167   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.405312   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.405525   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.405697   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.405714   76435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-014980' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-014980/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-014980' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:32.523970   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:32.523997   76435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:32.524044   76435 buildroot.go:174] setting up certificates
	I0828 18:21:32.524054   76435 provision.go:84] configureAuth start
	I0828 18:21:32.524063   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetMachineName
	I0828 18:21:32.524374   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:32.527040   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527391   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.527418   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.527540   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.529680   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.529986   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.530006   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.530170   76435 provision.go:143] copyHostCerts
	I0828 18:21:32.530220   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:32.530237   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:32.530306   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:32.530387   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:32.530399   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:32.530423   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:32.530475   76435 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:32.530481   76435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:32.530502   76435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:32.530556   76435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.embed-certs-014980 san=[127.0.0.1 192.168.72.130 embed-certs-014980 localhost minikube]
	I0828 18:21:32.755911   76435 provision.go:177] copyRemoteCerts
	I0828 18:21:32.755967   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:32.755990   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.758640   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.758944   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.758981   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.759123   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.759306   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.759442   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.759554   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:32.843219   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:32.867929   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0828 18:21:32.890143   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:32.911983   76435 provision.go:87] duration metric: took 387.917809ms to configureAuth
	I0828 18:21:32.912013   76435 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:32.912199   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:32.912281   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:32.914814   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915154   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:32.915188   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:32.915321   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:32.915550   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915717   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:32.915899   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:32.916116   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:32.916323   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:32.916378   76435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:33.137477   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:33.137500   76435 machine.go:96] duration metric: took 979.632081ms to provisionDockerMachine
	I0828 18:21:33.137513   76435 start.go:293] postStartSetup for "embed-certs-014980" (driver="kvm2")
	I0828 18:21:33.137526   76435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:33.137564   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.137847   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:33.137877   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.140267   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140555   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.140584   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.140731   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.140922   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.141078   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.141223   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.224499   76435 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:33.228643   76435 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:33.228672   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:33.228755   76435 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:33.228855   76435 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:33.229038   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:33.238208   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:33.260348   76435 start.go:296] duration metric: took 122.819807ms for postStartSetup
	I0828 18:21:33.260400   76435 fix.go:56] duration metric: took 19.141917324s for fixHost
	I0828 18:21:33.260424   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.262763   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263139   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.263168   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.263289   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.263482   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263659   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.263871   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.264050   76435 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:33.264216   76435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.130 22 <nil> <nil>}
	I0828 18:21:33.264226   76435 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:33.374640   76435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869293.352212530
	
	I0828 18:21:33.374664   76435 fix.go:216] guest clock: 1724869293.352212530
	I0828 18:21:33.374687   76435 fix.go:229] Guest: 2024-08-28 18:21:33.35221253 +0000 UTC Remote: 2024-08-28 18:21:33.260405829 +0000 UTC m=+259.867297948 (delta=91.806701ms)
	I0828 18:21:33.374708   76435 fix.go:200] guest clock delta is within tolerance: 91.806701ms
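	The clock check above parses the guest's `date +%s.%N` output and accepts the host when the difference from the local wall clock stays inside a tolerance. A minimal Go sketch of that comparison (the parsing helper and the 2-second tolerance are illustrative assumptions, not minikube's actual fix.go logic):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseEpoch converts the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	f, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	// Value taken from the log line above; in practice it would come
	// from running `date +%s.%N` over SSH on the guest.
	guest, err := parseEpoch("1724869293.352212530")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed tolerance for the sketch
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```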
	I0828 18:21:33.374713   76435 start.go:83] releasing machines lock for "embed-certs-014980", held for 19.256266619s
	I0828 18:21:33.374735   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.374991   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:33.377975   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378411   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.378436   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.378623   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379150   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379317   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:21:33.379409   76435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:33.379465   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.379568   76435 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:33.379594   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:21:33.381991   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382015   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382323   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382354   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:33.382379   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382438   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:33.382493   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382687   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:21:33.382694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382876   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:21:33.382907   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383029   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:21:33.383033   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.383145   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:21:33.508142   76435 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:33.514436   76435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:33.661055   76435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:33.666718   76435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:33.666774   76435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:33.683142   76435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:33.683169   76435 start.go:495] detecting cgroup driver to use...
	I0828 18:21:33.683253   76435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:33.698356   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:33.711626   76435 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:33.711690   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:33.724743   76435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:33.738782   76435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:33.852946   76435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:33.990370   76435 docker.go:233] disabling docker service ...
	I0828 18:21:33.990440   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:34.004746   76435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:34.017220   76435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:34.174534   76435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:34.320863   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:34.333880   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:34.351859   76435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:34.351907   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.362142   76435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:34.362223   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.372261   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.382374   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.396994   76435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:34.412126   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.422585   76435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.439314   76435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:34.449667   76435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:34.458389   76435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:34.458449   76435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:34.471501   76435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:34.480915   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:34.617633   76435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:34.731432   76435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:34.731508   76435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:34.736417   76435 start.go:563] Will wait 60s for crictl version
	I0828 18:21:34.736464   76435 ssh_runner.go:195] Run: which crictl
	I0828 18:21:34.740213   76435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:34.776804   76435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:34.776908   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.806826   76435 ssh_runner.go:195] Run: crio --version
	I0828 18:21:34.837961   76435 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0828 18:21:33.399527   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Start
	I0828 18:21:33.399696   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring networks are active...
	I0828 18:21:33.400382   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network default is active
	I0828 18:21:33.400737   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Ensuring network mk-default-k8s-diff-port-640552 is active
	I0828 18:21:33.401099   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Getting domain xml...
	I0828 18:21:33.401809   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Creating domain...
	I0828 18:21:34.684850   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting to get IP...
	I0828 18:21:34.685612   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.685998   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.686063   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.685980   78067 retry.go:31] will retry after 291.65765ms: waiting for machine to come up
	I0828 18:21:34.979550   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980029   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:34.980051   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:34.979993   78067 retry.go:31] will retry after 274.75755ms: waiting for machine to come up
	I0828 18:21:35.256257   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256724   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.256752   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.256666   78067 retry.go:31] will retry after 455.404257ms: waiting for machine to come up
	I0828 18:21:35.714147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714683   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:35.714716   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:35.714635   78067 retry.go:31] will retry after 426.56406ms: waiting for machine to come up
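	The repeated "will retry after ...: waiting for machine to come up" lines come from a generic retry helper that sleeps a growing, jittered delay between attempts while the VM acquires an IP. A rough Go sketch of that pattern (the helper name, attempt count, and jitter scheme are assumptions for illustration, not minikube's retry.go):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter keeps calling fn until it succeeds or attempts run out,
// waiting a randomized, growing delay between attempts - roughly the
// pattern behind the "will retry after ..." log lines above.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter so parallel waiters
		// do not poll in lockstep.
		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	tries := 0
	err := retryAfter(5, 300*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
```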
	I0828 18:21:34.839157   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetIP
	I0828 18:21:34.842000   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842417   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:21:34.842443   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:21:34.842650   76435 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:34.846628   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:34.859098   76435 kubeadm.go:883] updating cluster {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:34.859212   76435 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:34.859259   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:34.898150   76435 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:34.898233   76435 ssh_runner.go:195] Run: which lz4
	I0828 18:21:34.902220   76435 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:34.906463   76435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:34.906498   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:36.168426   76435 crio.go:462] duration metric: took 1.26624881s to copy over tarball
	I0828 18:21:36.168514   76435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:38.266205   76435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.097659696s)
	I0828 18:21:38.266252   76435 crio.go:469] duration metric: took 2.097775234s to extract the tarball
	I0828 18:21:38.266264   76435 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:38.302870   76435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:38.349495   76435 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:38.349527   76435 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:38.349538   76435 kubeadm.go:934] updating node { 192.168.72.130 8443 v1.31.0 crio true true} ...
	I0828 18:21:38.349672   76435 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-014980 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.130
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:38.349761   76435 ssh_runner.go:195] Run: crio config
	I0828 18:21:38.393310   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:38.393333   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:38.393346   76435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:38.393367   76435 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.130 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-014980 NodeName:embed-certs-014980 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.130"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.130 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:38.393502   76435 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.130
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-014980"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.130
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.130"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:38.393561   76435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:38.403059   76435 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:38.403128   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:38.411944   76435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0828 18:21:38.427006   76435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
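	The kubeadm/kubelet configuration rendered a few lines above is plain YAML, so its key settings (cgroup driver, CRI socket, cluster domain) can be pulled back out with any YAML decoder. A small Go sketch under that assumption, using gopkg.in/yaml.v3 rather than anything minikube itself ships:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletConfig holds just the fields of interest from the
// KubeletConfiguration document generated above.
type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	ClusterDomain            string `yaml:"clusterDomain"`
}

func main() {
	// Trimmed-down copy of the KubeletConfiguration document from the log.
	doc := `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
clusterDomain: "cluster.local"
`
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s domain=%s\n",
		cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.ClusterDomain)
}
```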
	I0828 18:21:36.143403   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.143961   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.143901   78067 retry.go:31] will retry after 623.404625ms: waiting for machine to come up
	I0828 18:21:36.768738   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:36.769339   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:36.769256   78067 retry.go:31] will retry after 750.082443ms: waiting for machine to come up
	I0828 18:21:37.521122   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521604   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:37.521633   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:37.521562   78067 retry.go:31] will retry after 837.989492ms: waiting for machine to come up
	I0828 18:21:38.361659   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362111   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:38.362140   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:38.362056   78067 retry.go:31] will retry after 1.13122193s: waiting for machine to come up
	I0828 18:21:39.495248   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495643   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:39.495673   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:39.495578   78067 retry.go:31] will retry after 1.180862235s: waiting for machine to come up
	I0828 18:21:40.677748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678090   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:40.678117   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:40.678045   78067 retry.go:31] will retry after 2.245023454s: waiting for machine to come up
	I0828 18:21:38.441960   76435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0828 18:21:38.457509   76435 ssh_runner.go:195] Run: grep 192.168.72.130	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:38.461003   76435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.130	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:38.472232   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:38.591387   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:38.606911   76435 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980 for IP: 192.168.72.130
	I0828 18:21:38.606935   76435 certs.go:194] generating shared ca certs ...
	I0828 18:21:38.606957   76435 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:38.607122   76435 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:38.607186   76435 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:38.607199   76435 certs.go:256] generating profile certs ...
	I0828 18:21:38.607304   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/client.key
	I0828 18:21:38.607398   76435 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key.f4b1f9f1
	I0828 18:21:38.607449   76435 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key
	I0828 18:21:38.607595   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:38.607634   76435 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:38.607646   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:38.607679   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:38.607726   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:38.607756   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:38.607808   76435 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:38.608698   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:38.647796   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:38.685835   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:38.738515   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:38.769248   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0828 18:21:38.795091   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:38.816857   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:38.839153   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/embed-certs-014980/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:38.861024   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:38.882488   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:38.905023   76435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:38.927997   76435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:38.945870   76435 ssh_runner.go:195] Run: openssl version
	I0828 18:21:38.951753   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:38.962635   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966847   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.966895   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:38.972529   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:21:38.982689   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:38.992812   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996942   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:38.996991   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:39.002359   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:39.012423   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:39.022765   76435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.026945   76435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.027007   76435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:39.032233   76435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:39.042709   76435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:39.046904   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:39.052563   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:39.057937   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:39.063465   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:39.068788   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:39.074233   76435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
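	Each `openssl x509 -noout -checkend 86400` run above asks whether the given certificate expires within the next 24 hours. The same check can be expressed directly with Go's crypto/x509; this is an illustrative sketch, not the code minikube runs:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d - the same question `openssl x509 -checkend 86400`
// answers for each certificate checked above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```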
	I0828 18:21:39.079673   76435 kubeadm.go:392] StartCluster: {Name:embed-certs-014980 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-014980 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:39.079776   76435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:39.079824   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.120250   76435 cri.go:89] found id: ""
	I0828 18:21:39.120331   76435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:39.130147   76435 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:39.130171   76435 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:39.130223   76435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:39.139586   76435 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:39.140642   76435 kubeconfig.go:125] found "embed-certs-014980" server: "https://192.168.72.130:8443"
	I0828 18:21:39.142695   76435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:39.152102   76435 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.130
	I0828 18:21:39.152136   76435 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:39.152149   76435 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:39.152225   76435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:39.189811   76435 cri.go:89] found id: ""
	I0828 18:21:39.189899   76435 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:39.205579   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:39.215378   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:39.215401   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:39.215451   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:21:39.225068   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:39.225136   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:39.234254   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:21:39.243009   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:39.243072   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:39.252251   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.261241   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:39.261314   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:39.270443   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:21:39.278999   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:39.279070   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:39.288033   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:21:39.297331   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:39.396232   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.225819   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.420586   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.482893   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:40.601563   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:40.601672   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.101955   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:41.602572   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.102180   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.602520   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:42.635705   76435 api_server.go:72] duration metric: took 2.034151361s to wait for apiserver process to appear ...
	I0828 18:21:42.635738   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:21:42.635762   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.636263   76435 api_server.go:269] stopped: https://192.168.72.130:8443/healthz: Get "https://192.168.72.130:8443/healthz": dial tcp 192.168.72.130:8443: connect: connection refused
	I0828 18:21:43.136019   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:42.925748   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926265   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:42.926293   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:42.926217   78067 retry.go:31] will retry after 2.565646238s: waiting for machine to come up
	I0828 18:21:45.494477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495032   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:45.495058   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:45.494982   78067 retry.go:31] will retry after 2.418376782s: waiting for machine to come up
	I0828 18:21:45.980398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:45.980429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:45.980444   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.010352   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:21:46.010385   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:21:46.136576   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.141398   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.141429   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:46.635898   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:46.641672   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:46.641712   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.136295   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.142623   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:21:47.142657   76435 api_server.go:103] status: https://192.168.72.130:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:21:47.636199   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:21:47.640325   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:21:47.647198   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:21:47.647226   76435 api_server.go:131] duration metric: took 5.011481159s to wait for apiserver health ...
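For reference, the per-check health breakdown that minikube polls above can also be fetched by hand. The commands below are an illustrative sketch and not part of the captured test run: the host and port come from the log, and anonymous requests may still get the 403 shown earlier until the rbac/bootstrap-roles post-start hook completes.

	kubectl get --raw '/healthz?verbose'                     # with kubeconfig access to this cluster
	curl -k 'https://192.168.72.130:8443/healthz?verbose'    # directly against the endpoint from the log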
	I0828 18:21:47.647236   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:21:47.647245   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:47.649638   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:21:47.650998   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:21:47.662361   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
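The 1-k8s.conflist written above is generated by minikube; its 496-byte contents are not captured in this log. If needed, it can be inspected on the node after the fact; the profile name below is inferred from the pod names later in this log and is an assumption:

	minikube ssh -p embed-certs-014980 -- sudo cat /etc/cni/net.d/1-k8s.conflist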
	I0828 18:21:47.683446   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:21:47.696066   76435 system_pods.go:59] 8 kube-system pods found
	I0828 18:21:47.696100   76435 system_pods.go:61] "coredns-6f6b679f8f-4g2n8" [9c34e013-4c11-4805-9d58-987bb130f1b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:21:47.696120   76435 system_pods.go:61] "etcd-embed-certs-014980" [164f2ce3-8df6-4e56-a959-80b08848a181] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:21:47.696133   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [c637e3e0-4e54-44b1-8eb7-ea11d3b5753a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:21:47.696143   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [2d786cc0-a0c7-430c-89e1-9889e919289d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:21:47.696149   76435 system_pods.go:61] "kube-proxy-4lz5q" [a5f2213b-6b36-4656-8a26-26903bc09441] Running
	I0828 18:21:47.696158   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [2aa3787a-7a70-4cfb-8810-9f4e02240012] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:21:47.696167   76435 system_pods.go:61] "metrics-server-6867b74b74-f56j2" [91d30fa3-cc63-4d61-8cd3-46ecc950c31f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:21:47.696176   76435 system_pods.go:61] "storage-provisioner" [54d357f5-8f8a-429b-81db-40c9f2857fde] Running
	I0828 18:21:47.696185   76435 system_pods.go:74] duration metric: took 12.718326ms to wait for pod list to return data ...
	I0828 18:21:47.696198   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:21:47.699492   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:21:47.699515   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:21:47.699528   76435 node_conditions.go:105] duration metric: took 3.324668ms to run NodePressure ...
	I0828 18:21:47.699548   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:47.970122   76435 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973854   76435 kubeadm.go:739] kubelet initialised
	I0828 18:21:47.973874   76435 kubeadm.go:740] duration metric: took 3.724056ms waiting for restarted kubelet to initialise ...
	I0828 18:21:47.973881   76435 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:21:47.978377   76435 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:21:47.916599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.916976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | unable to find current IP address of domain default-k8s-diff-port-640552 in network mk-default-k8s-diff-port-640552
	I0828 18:21:47.917015   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | I0828 18:21:47.916941   78067 retry.go:31] will retry after 3.1564792s: waiting for machine to come up
	I0828 18:21:52.286982   77396 start.go:364] duration metric: took 3m6.98234152s to acquireMachinesLock for "old-k8s-version-131737"
	I0828 18:21:52.287057   77396 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:21:52.287069   77396 fix.go:54] fixHost starting: 
	I0828 18:21:52.287554   77396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:21:52.287595   77396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:21:52.305954   77396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36257
	I0828 18:21:52.306439   77396 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:21:52.306908   77396 main.go:141] libmachine: Using API Version  1
	I0828 18:21:52.306928   77396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:21:52.307228   77396 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:21:52.307404   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:21:52.307571   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetState
	I0828 18:21:52.309284   77396 fix.go:112] recreateIfNeeded on old-k8s-version-131737: state=Stopped err=<nil>
	I0828 18:21:52.309322   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	W0828 18:21:52.309508   77396 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:21:52.311369   77396 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-131737" ...
	I0828 18:21:49.984379   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.985536   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:51.075186   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.075681   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Found IP for machine: 192.168.39.226
	I0828 18:21:51.075698   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserving static IP address...
	I0828 18:21:51.075746   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has current primary IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.076159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.076184   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | skip adding static IP to network mk-default-k8s-diff-port-640552 - found existing host DHCP lease matching {name: "default-k8s-diff-port-640552", mac: "52:54:00:84:6b:cd", ip: "192.168.39.226"}
	I0828 18:21:51.076201   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Reserved static IP address: 192.168.39.226
	I0828 18:21:51.076218   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Waiting for SSH to be available...
	I0828 18:21:51.076230   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Getting to WaitForSSH function...
	I0828 18:21:51.078435   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078745   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.078766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.078967   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH client type: external
	I0828 18:21:51.079000   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa (-rw-------)
	I0828 18:21:51.079053   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:21:51.079079   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | About to run SSH command:
	I0828 18:21:51.079114   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | exit 0
	I0828 18:21:51.205844   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | SSH cmd err, output: <nil>: 
	I0828 18:21:51.206145   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetConfigRaw
	I0828 18:21:51.206821   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.209159   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.209563   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.209753   76486 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/config.json ...
	I0828 18:21:51.209980   76486 machine.go:93] provisionDockerMachine start ...
	I0828 18:21:51.209999   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:51.210244   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.212321   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212651   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.212677   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.212800   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.212971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.213273   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.213408   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.213639   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.213650   76486 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:21:51.330211   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:21:51.330249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330530   76486 buildroot.go:166] provisioning hostname "default-k8s-diff-port-640552"
	I0828 18:21:51.330558   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.330820   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.333492   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.333855   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.333885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.334027   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.334249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334469   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.334658   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.334844   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.335003   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.335015   76486 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-640552 && echo "default-k8s-diff-port-640552" | sudo tee /etc/hostname
	I0828 18:21:51.459660   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-640552
	
	I0828 18:21:51.459690   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.462286   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462636   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.462668   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.462842   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.463034   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463181   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.463307   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.463470   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.463650   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.463682   76486 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-640552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-640552/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-640552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:21:51.581714   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:21:51.581740   76486 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:21:51.581777   76486 buildroot.go:174] setting up certificates
	I0828 18:21:51.581792   76486 provision.go:84] configureAuth start
	I0828 18:21:51.581807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetMachineName
	I0828 18:21:51.582130   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:51.584626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.584945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.584976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.585073   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.587285   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587672   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.587700   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.587868   76486 provision.go:143] copyHostCerts
	I0828 18:21:51.587926   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:21:51.587946   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:21:51.588003   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:21:51.588092   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:21:51.588100   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:21:51.588124   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:21:51.588244   76486 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:21:51.588255   76486 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:21:51.588277   76486 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:21:51.588332   76486 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-640552 san=[127.0.0.1 192.168.39.226 default-k8s-diff-port-640552 localhost minikube]
	I0828 18:21:51.657408   76486 provision.go:177] copyRemoteCerts
	I0828 18:21:51.657457   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:21:51.657480   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.660152   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660494   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.660514   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.660709   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.660911   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.661078   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.661251   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:51.751729   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:21:51.773473   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0828 18:21:51.796174   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:21:51.817640   76486 provision.go:87] duration metric: took 235.828003ms to configureAuth
	I0828 18:21:51.817672   76486 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:21:51.817892   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:21:51.817983   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:51.820433   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.820780   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:51.820807   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:51.821016   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:51.821214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821371   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:51.821533   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:51.821684   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:51.821852   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:51.821870   76486 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:21:52.048026   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:21:52.048055   76486 machine.go:96] duration metric: took 838.061836ms to provisionDockerMachine
	I0828 18:21:52.048067   76486 start.go:293] postStartSetup for "default-k8s-diff-port-640552" (driver="kvm2")
	I0828 18:21:52.048078   76486 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:21:52.048097   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.048437   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:21:52.048472   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.051115   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051385   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.051410   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.051597   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.051815   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.051971   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.052066   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.136350   76486 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:21:52.140200   76486 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:21:52.140228   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:21:52.140303   76486 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:21:52.140397   76486 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:21:52.140496   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:21:52.149451   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:52.172381   76486 start.go:296] duration metric: took 124.302384ms for postStartSetup
	I0828 18:21:52.172450   76486 fix.go:56] duration metric: took 18.797536411s for fixHost
	I0828 18:21:52.172477   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.174891   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175255   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.175274   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.175474   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.175631   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175788   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.175945   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.176100   76486 main.go:141] libmachine: Using SSH client type: native
	I0828 18:21:52.176279   76486 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.226 22 <nil> <nil>}
	I0828 18:21:52.176289   76486 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:21:52.286801   76486 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869312.259614140
	
	I0828 18:21:52.286827   76486 fix.go:216] guest clock: 1724869312.259614140
	I0828 18:21:52.286835   76486 fix.go:229] Guest: 2024-08-28 18:21:52.25961414 +0000 UTC Remote: 2024-08-28 18:21:52.172457684 +0000 UTC m=+276.471609311 (delta=87.156456ms)
	I0828 18:21:52.286854   76486 fix.go:200] guest clock delta is within tolerance: 87.156456ms
	I0828 18:21:52.286859   76486 start.go:83] releasing machines lock for "default-k8s-diff-port-640552", held for 18.912007294s
	I0828 18:21:52.286884   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.287148   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:52.289951   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290346   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.290370   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.290500   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.290976   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291147   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:21:52.291228   76486 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:21:52.291282   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.291325   76486 ssh_runner.go:195] Run: cat /version.json
	I0828 18:21:52.291344   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:21:52.294010   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294039   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294464   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294490   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294599   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:52.294637   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:52.294685   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294885   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:21:52.294896   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295146   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295185   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:21:52.295331   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:21:52.295326   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.295560   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:21:52.380284   76486 ssh_runner.go:195] Run: systemctl --version
	I0828 18:21:52.421868   76486 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:21:52.563478   76486 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:21:52.569318   76486 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:21:52.569408   76486 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:21:52.585683   76486 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:21:52.585709   76486 start.go:495] detecting cgroup driver to use...
	I0828 18:21:52.585781   76486 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:21:52.603511   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:21:52.616868   76486 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:21:52.616930   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:21:52.631574   76486 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:21:52.644913   76486 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:21:52.762863   76486 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:21:52.920107   76486 docker.go:233] disabling docker service ...
	I0828 18:21:52.920183   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:21:52.937155   76486 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:21:52.951124   76486 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:21:53.063496   76486 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:21:53.187655   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:21:53.201452   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:21:53.219663   76486 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:21:53.219734   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.230165   76486 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:21:53.230247   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.240480   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.251258   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.262763   76486 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:21:53.273597   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.283571   76486 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.302935   76486 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:21:53.313508   76486 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:21:53.322781   76486 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:21:53.322850   76486 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:21:53.337049   76486 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:21:53.347349   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:53.455027   76486 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0828 18:21:53.551547   76486 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:21:53.551607   76486 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:21:53.556960   76486 start.go:563] Will wait 60s for crictl version
	I0828 18:21:53.557066   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:21:53.560695   76486 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:21:53.603636   76486 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:21:53.603721   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.632017   76486 ssh_runner.go:195] Run: crio --version
	I0828 18:21:53.664760   76486 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
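The sed edits applied earlier land in /etc/crio/crio.conf.d/02-crio.conf. As a sketch (not part of the test itself), the resulting settings and runtime version can be confirmed from inside the VM with:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|net.ipv4.ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl version   # should report cri-o 1.29.1, matching the version logged above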
	I0828 18:21:52.312648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .Start
	I0828 18:21:52.312862   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring networks are active...
	I0828 18:21:52.313682   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network default is active
	I0828 18:21:52.314112   77396 main.go:141] libmachine: (old-k8s-version-131737) Ensuring network mk-old-k8s-version-131737 is active
	I0828 18:21:52.314488   77396 main.go:141] libmachine: (old-k8s-version-131737) Getting domain xml...
	I0828 18:21:52.315180   77396 main.go:141] libmachine: (old-k8s-version-131737) Creating domain...
	I0828 18:21:53.582013   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting to get IP...
	I0828 18:21:53.583124   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.583609   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.583672   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.583582   78246 retry.go:31] will retry after 289.679773ms: waiting for machine to come up
	I0828 18:21:53.875299   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:53.876115   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:53.876144   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:53.876051   78246 retry.go:31] will retry after 263.317798ms: waiting for machine to come up
	I0828 18:21:54.141733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.142310   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.142340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.142257   78246 retry.go:31] will retry after 440.224905ms: waiting for machine to come up
	I0828 18:21:54.584505   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.585061   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.585084   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.585018   78246 retry.go:31] will retry after 379.546405ms: waiting for machine to come up
	I0828 18:21:54.966516   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:54.967130   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:54.967153   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:54.967045   78246 retry.go:31] will retry after 754.463377ms: waiting for machine to come up
	I0828 18:21:53.665810   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetIP
	I0828 18:21:53.668882   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669330   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:21:53.669352   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:21:53.669589   76486 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0828 18:21:53.673693   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:53.685432   76486 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:21:53.685546   76486 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:21:53.685593   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:53.720069   76486 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:21:53.720129   76486 ssh_runner.go:195] Run: which lz4
	I0828 18:21:53.723841   76486 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:21:53.727666   76486 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:21:53.727697   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0828 18:21:54.993725   76486 crio.go:462] duration metric: took 1.269921848s to copy over tarball
	I0828 18:21:54.993802   76486 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0828 18:21:53.987677   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:56.485568   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:21:55.723533   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:55.724021   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:55.724042   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:55.723980   78246 retry.go:31] will retry after 607.743145ms: waiting for machine to come up
	I0828 18:21:56.333733   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:56.334181   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:56.334210   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:56.334135   78246 retry.go:31] will retry after 1.098394488s: waiting for machine to come up
	I0828 18:21:57.433729   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:57.434212   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:57.434243   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:57.434157   78246 retry.go:31] will retry after 1.195993343s: waiting for machine to come up
	I0828 18:21:58.631451   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:21:58.631839   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:21:58.631867   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:21:58.631798   78246 retry.go:31] will retry after 1.807712472s: waiting for machine to come up
	I0828 18:21:57.135009   76486 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.141177811s)
	I0828 18:21:57.135041   76486 crio.go:469] duration metric: took 2.141292479s to extract the tarball
	I0828 18:21:57.135051   76486 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:21:57.172381   76486 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:21:57.211971   76486 crio.go:514] all images are preloaded for cri-o runtime.
	I0828 18:21:57.211993   76486 cache_images.go:84] Images are preloaded, skipping loading
	I0828 18:21:57.212003   76486 kubeadm.go:934] updating node { 192.168.39.226 8444 v1.31.0 crio true true} ...
	I0828 18:21:57.212123   76486 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-640552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:21:57.212202   76486 ssh_runner.go:195] Run: crio config
	I0828 18:21:57.254347   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:21:57.254378   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:21:57.254402   76486 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:21:57.254431   76486 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.226 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-640552 NodeName:default-k8s-diff-port-640552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:21:57.254630   76486 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.226
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-640552"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:21:57.254715   76486 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:21:57.264233   76486 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:21:57.264304   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:21:57.273293   76486 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0828 18:21:57.289211   76486 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:21:57.304829   76486 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0828 18:21:57.323062   76486 ssh_runner.go:195] Run: grep 192.168.39.226	control-plane.minikube.internal$ /etc/hosts
	I0828 18:21:57.326891   76486 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:21:57.339775   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:21:57.463802   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:21:57.479266   76486 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552 for IP: 192.168.39.226
	I0828 18:21:57.479288   76486 certs.go:194] generating shared ca certs ...
	I0828 18:21:57.479325   76486 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:21:57.479519   76486 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:21:57.479570   76486 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:21:57.479584   76486 certs.go:256] generating profile certs ...
	I0828 18:21:57.479702   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/client.key
	I0828 18:21:57.479774   76486 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key.90f46fd7
	I0828 18:21:57.479829   76486 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key
	I0828 18:21:57.479977   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:21:57.480018   76486 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:21:57.480031   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:21:57.480071   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:21:57.480109   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:21:57.480142   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:21:57.480199   76486 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:21:57.481063   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:21:57.514802   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:21:57.555506   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:21:57.585381   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:21:57.613009   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0828 18:21:57.637776   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:21:57.662590   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:21:57.684482   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/default-k8s-diff-port-640552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:21:57.707287   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:21:57.728392   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:21:57.750217   76486 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:21:57.771310   76486 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:21:57.786814   76486 ssh_runner.go:195] Run: openssl version
	I0828 18:21:57.792053   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:21:57.802301   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806552   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.806627   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:21:57.812238   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:21:57.824231   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:21:57.834783   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.838954   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.839008   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:21:57.844456   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:21:57.856262   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:21:57.867737   76486 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872040   76486 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.872122   76486 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:21:57.877506   76486 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
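The openssl/ln sequence above is how the distributed CA material becomes trusted system-wide: each PEM is hashed with `openssl x509 -hash` and linked into /etc/ssl/certs as <hash>.0, the lookup layout OpenSSL uses. The same steps, condensed into one loop over the files named in the log:

    # Link each distributed PEM into OpenSSL's hashed trust directory.
    for pem in /usr/share/ca-certificates/minikubeCA.pem \
               /usr/share/ca-certificates/17528.pem \
               /usr/share/ca-certificates/175282.pem; do
      hash=$(sudo openssl x509 -hash -noout -in "$pem")   # e.g. b5213941 for minikubeCA.pem
      sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
    done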
	I0828 18:21:57.889018   76486 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:21:57.893303   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:21:57.899199   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:21:57.907716   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:21:57.915801   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:21:57.923795   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:21:57.929601   76486 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
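Each -checkend probe above answers one question per certificate: will it still be valid 86400 seconds (24 hours) from now? A compact version that walks the same control-plane certs and flags any that are close to expiry:

    # Report any control-plane certificate that expires within the next 24 hours.
    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$crt" -checkend 86400 \
        || echo "EXPIRING SOON: $crt"
    done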
	I0828 18:21:57.935563   76486 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-640552 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-640552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:21:57.935655   76486 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:21:57.935698   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:57.975236   76486 cri.go:89] found id: ""
	I0828 18:21:57.975308   76486 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:21:57.986945   76486 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:21:57.986966   76486 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:21:57.987014   76486 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:21:57.996355   76486 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:21:57.997293   76486 kubeconfig.go:125] found "default-k8s-diff-port-640552" server: "https://192.168.39.226:8444"
	I0828 18:21:57.999257   76486 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:21:58.008531   76486 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.226
	I0828 18:21:58.008555   76486 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:21:58.008564   76486 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:21:58.008612   76486 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:21:58.054603   76486 cri.go:89] found id: ""
	I0828 18:21:58.054681   76486 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:21:58.072017   76486 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:21:58.085982   76486 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:21:58.086007   76486 kubeadm.go:157] found existing configuration files:
	
	I0828 18:21:58.086087   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0828 18:21:58.094721   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:21:58.094790   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:21:58.108457   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0828 18:21:58.120495   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:21:58.120568   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:21:58.130432   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.139428   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:21:58.139495   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:21:58.148537   76486 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0828 18:21:58.157182   76486 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:21:58.157241   76486 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:21:58.166178   76486 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
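The four grep/rm pairs above implement the stale-config check: any kubeconfig under /etc/kubernetes that does not reference control-plane.minikube.internal:8444 is treated as stale and removed so kubeadm can regenerate it (here the files simply did not exist yet). The same logic as one small loop:

    # Remove any kubeconfig that does not point at the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:8444"
    for conf in admin kubelet controller-manager scheduler; do
      f="/etc/kubernetes/${conf}.conf"
      sudo grep -q "$endpoint" "$f" 2>/dev/null || sudo rm -f "$f"
    done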
	I0828 18:21:58.175176   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:58.276043   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.072360   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.270937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:21:59.344719   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
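Rather than a full `kubeadm init`, the restart path replays individual init phases against the rendered /var/tmp/minikube/kubeadm.yaml, in exactly the order shown above. Spelled out as it runs on the node (binary path as in the log):

    # Replay only the kubeadm init phases needed to bring an already-configured control plane back up.
    KUBEADM=/var/lib/minikube/binaries/v1.31.0/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" $KUBEADM init phase $phase --config "$CFG"
    done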
	I0828 18:21:59.442568   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:21:59.442664   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:59.942860   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:00.443271   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:21:58.485632   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:00.694313   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:00.694341   76435 pod_ready.go:82] duration metric: took 12.71594065s for pod "coredns-6f6b679f8f-4g2n8" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.694354   76435 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210752   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.210805   76435 pod_ready.go:82] duration metric: took 516.442507ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.210821   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218781   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.218809   76435 pod_ready.go:82] duration metric: took 7.979295ms for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.218823   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725883   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.725914   76435 pod_ready.go:82] duration metric: took 507.08194ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.725932   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731866   76435 pod_ready.go:93] pod "kube-proxy-4lz5q" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.731891   76435 pod_ready.go:82] duration metric: took 5.951323ms for pod "kube-proxy-4lz5q" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.731903   76435 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737160   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:01.737191   76435 pod_ready.go:82] duration metric: took 5.279341ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:01.737203   76435 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:00.441679   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:00.442149   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:00.442178   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:00.442063   78246 retry.go:31] will retry after 2.175897132s: waiting for machine to come up
	I0828 18:22:02.620076   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:02.620562   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:02.620589   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:02.620527   78246 retry.go:31] will retry after 1.749248103s: waiting for machine to come up
	I0828 18:22:04.371390   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:04.371924   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:04.371969   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:04.371875   78246 retry.go:31] will retry after 2.412168623s: waiting for machine to come up
	I0828 18:22:00.943566   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.443708   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.943361   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:01.957227   76486 api_server.go:72] duration metric: took 2.514666609s to wait for apiserver process to appear ...
	I0828 18:22:01.957258   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:01.957281   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.174923   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.174955   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.174970   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.227506   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:04.227540   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:04.457869   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.463842   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.463884   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:04.957398   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:04.964576   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:04.964606   76486 api_server.go:103] status: https://192.168.39.226:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:05.457724   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:22:05.461808   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:22:05.467732   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:05.467757   76486 api_server.go:131] duration metric: took 3.510492089s to wait for apiserver health ...
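The 403 → 500 → 200 progression above is the normal healthz sequence for a restarting apiserver: the probe is forbidden until it is authorized, then the endpoint keeps returning 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still running, and finally 200. The same endpoint can be queried by hand; a sketch, assuming a kubectl binary that can read the node's admin kubeconfig:

    # Ask the apiserver for per-check health status; unauthenticated requests come back
    # 403 for system:anonymous, exactly as the first probes in the log did.
    kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'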
	I0828 18:22:05.467766   76486 cni.go:84] Creating CNI manager for ""
	I0828 18:22:05.467771   76486 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:05.469553   76486 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:05.470759   76486 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:05.481858   76486 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
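With the apiserver healthy, minikube drops a bridge CNI config at /etc/cni/net.d/1-k8s.conflist so pods land in the 10.244.0.0/16 pod CIDR passed to kubeadm. A quick way to confirm the wiring afterwards (file name from the log; the jsonpath query is just a convenience, again assuming kubectl and the admin kubeconfig are reachable):

    # Show the bridge CNI definition that was installed, then check the node's pod CIDR.
    sudo cat /etc/cni/net.d/1-k8s.conflist
    kubectl --kubeconfig /etc/kubernetes/admin.conf \
      get node default-k8s-diff-port-640552 -o jsonpath='{.spec.podCIDR}{"\n"}'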
	I0828 18:22:05.500789   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:05.512306   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:05.512336   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:05.512343   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:05.512353   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:05.512360   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:05.512368   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:05.512379   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:05.512386   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:05.512396   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:05.512405   76486 system_pods.go:74] duration metric: took 11.592471ms to wait for pod list to return data ...
	I0828 18:22:05.512419   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:05.516136   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:05.516167   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:05.516182   76486 node_conditions.go:105] duration metric: took 3.757746ms to run NodePressure ...
	I0828 18:22:05.516205   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:05.793448   76486 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798810   76486 kubeadm.go:739] kubelet initialised
	I0828 18:22:05.798827   76486 kubeadm.go:740] duration metric: took 5.35696ms waiting for restarted kubelet to initialise ...
	I0828 18:22:05.798835   76486 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:05.803644   76486 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.808185   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808206   76486 pod_ready.go:82] duration metric: took 4.541551ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.808214   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.808226   76486 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.812918   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812941   76486 pod_ready.go:82] duration metric: took 4.703246ms for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.812950   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.812956   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.817019   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817036   76486 pod_ready.go:82] duration metric: took 4.075009ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.817045   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.817050   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:05.904575   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904606   76486 pod_ready.go:82] duration metric: took 87.547744ms for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:05.904621   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:05.904628   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.304141   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304168   76486 pod_ready.go:82] duration metric: took 399.53302ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.304177   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-proxy-lmpft" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.304182   76486 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:06.704632   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704663   76486 pod_ready.go:82] duration metric: took 400.470144ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:06.704677   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:06.704686   76486 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:07.104218   76486 pod_ready.go:98] node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104247   76486 pod_ready.go:82] duration metric: took 399.550393ms for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:07.104261   76486 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-640552" hosting pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:07.104270   76486 pod_ready.go:39] duration metric: took 1.305425633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
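Every pod_ready wait above is gated on the node itself reporting Ready, which it is not yet doing right after the kubelet restart, so each system pod is skipped and retried later. Roughly the same check expressed with kubectl, for anyone reproducing the wait by hand (label selectors taken from the list in the log; assumes access to the admin kubeconfig):

    # Wait for the node to report Ready, then for each system-critical pod label minikube tracks.
    K="kubectl --kubeconfig /etc/kubernetes/admin.conf"
    $K wait --for=condition=Ready node/default-k8s-diff-port-640552 --timeout=6m
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      $K -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
    done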
	I0828 18:22:07.104296   76486 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:07.115055   76486 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:07.115077   76486 kubeadm.go:597] duration metric: took 9.128104649s to restartPrimaryControlPlane
	I0828 18:22:07.115085   76486 kubeadm.go:394] duration metric: took 9.179528813s to StartCluster
	I0828 18:22:07.115105   76486 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.115169   76486 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:07.116738   76486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:07.116962   76486 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.226 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:07.117026   76486 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:07.117104   76486 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117121   76486 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117136   76486 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117150   76486 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:07.117175   76486 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-640552"
	I0828 18:22:07.117185   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117191   76486 config.go:182] Loaded profile config "default-k8s-diff-port-640552": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:07.117166   76486 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-640552"
	I0828 18:22:07.117280   76486 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.117291   76486 addons.go:243] addon metrics-server should already be in state true
	I0828 18:22:07.117316   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.117551   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117585   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117607   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117622   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.117666   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.117687   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.118665   76486 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:07.119962   76486 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0828 18:22:07.132877   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35151
	I0828 18:22:07.133468   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133474   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.133473   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
	I0828 18:22:07.133904   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.134022   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134039   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134044   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134055   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134378   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.134405   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.134416   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134425   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134582   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.134742   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.134990   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135019   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.135331   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.135358   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.142488   76486 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-640552"
	W0828 18:22:07.142508   76486 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:07.142534   76486 host.go:66] Checking if "default-k8s-diff-port-640552" exists ...
	I0828 18:22:07.142790   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.142845   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.151553   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43163
	I0828 18:22:07.152067   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.152561   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.152578   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.152988   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.153172   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.153267   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39079
	I0828 18:22:07.153647   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.154071   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.154118   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.154424   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.154657   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.155656   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.156384   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.158167   76486 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:07.158170   76486 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:03.743115   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:06.246448   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:07.159313   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0828 18:22:07.159655   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.159730   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:07.159748   76486 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:07.159766   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.159877   76486 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.159893   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:07.159908   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.160069   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.160087   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.160501   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.160999   76486 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:07.161042   76486 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:07.163522   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163560   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163954   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163960   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.163980   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.163989   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.164249   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164312   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.164451   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164455   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.164575   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164626   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.164746   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.164806   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.177679   76486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39237
	I0828 18:22:07.178179   76486 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:07.178711   76486 main.go:141] libmachine: Using API Version  1
	I0828 18:22:07.178732   76486 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:07.179027   76486 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:07.179214   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetState
	I0828 18:22:07.180671   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .DriverName
	I0828 18:22:07.180897   76486 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.180912   76486 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:07.180931   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHHostname
	I0828 18:22:07.183194   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183530   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:6b:cd", ip: ""} in network mk-default-k8s-diff-port-640552: {Iface:virbr1 ExpiryTime:2024-08-28 19:21:44 +0000 UTC Type:0 Mac:52:54:00:84:6b:cd Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:default-k8s-diff-port-640552 Clientid:01:52:54:00:84:6b:cd}
	I0828 18:22:07.183619   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | domain default-k8s-diff-port-640552 has defined IP address 192.168.39.226 and MAC address 52:54:00:84:6b:cd in network mk-default-k8s-diff-port-640552
	I0828 18:22:07.183784   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHPort
	I0828 18:22:07.183935   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHKeyPath
	I0828 18:22:07.184064   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .GetSSHUsername
	I0828 18:22:07.184197   76486 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/default-k8s-diff-port-640552/id_rsa Username:docker}
	I0828 18:22:07.320359   76486 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:07.338447   76486 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:07.422788   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:07.478274   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:07.478295   76486 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:07.481718   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:07.539263   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:07.539287   76486 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:07.610393   76486 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:07.610415   76486 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:07.671875   76486 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
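
As the command above shows, the addon manifests are copied into /etc/kubernetes/addons on the guest and then applied in one batch with the bundled kubectl. A rough Go equivalent using os/exec follows; the binary and manifest paths are copied from the logged command, and the rest is an illustrative sketch rather than minikube's actual addon code.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Paths mirror the log; inside the guest the kubectl binary lives under
	// /var/lib/minikube/binaries for the deployed Kubernetes version.
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
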
	I0828 18:22:08.436371   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436397   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436468   76486 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.013643707s)
	I0828 18:22:08.436507   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436520   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436690   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436708   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436720   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436728   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436823   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.436836   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436848   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.436857   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.436866   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.436939   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.436952   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.437124   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) DBG | Closing plugin on server side
	I0828 18:22:08.437174   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.437198   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.442852   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.442871   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.443116   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.443135   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601340   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601386   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601681   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.601728   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.601743   76486 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:08.601753   76486 main.go:141] libmachine: (default-k8s-diff-port-640552) Calling .Close
	I0828 18:22:08.601998   76486 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:08.602020   76486 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:08.602030   76486 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-640552"
	I0828 18:22:08.603833   76486 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:06.787073   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:06.787468   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | unable to find current IP address of domain old-k8s-version-131737 in network mk-old-k8s-version-131737
	I0828 18:22:06.787506   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | I0828 18:22:06.787418   78246 retry.go:31] will retry after 3.844761666s: waiting for machine to come up
	I0828 18:22:08.605028   76486 addons.go:510] duration metric: took 1.488006928s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:09.342263   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:11.990693   75908 start.go:364] duration metric: took 52.869802321s to acquireMachinesLock for "no-preload-072854"
	I0828 18:22:11.990749   75908 start.go:96] Skipping create...Using existing machine configuration
	I0828 18:22:11.990756   75908 fix.go:54] fixHost starting: 
	I0828 18:22:11.991173   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:11.991211   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:12.008247   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33817
	I0828 18:22:12.008729   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:12.009170   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:12.009193   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:12.009534   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:12.009732   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:12.009873   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:12.011416   75908 fix.go:112] recreateIfNeeded on no-preload-072854: state=Stopped err=<nil>
	I0828 18:22:12.011442   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	W0828 18:22:12.011599   75908 fix.go:138] unexpected machine state, will restart: <nil>
	I0828 18:22:12.013401   75908 out.go:177] * Restarting existing kvm2 VM for "no-preload-072854" ...
	I0828 18:22:08.747994   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:11.243666   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:13.245991   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:10.635599   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.635992   77396 main.go:141] libmachine: (old-k8s-version-131737) Found IP for machine: 192.168.50.99
	I0828 18:22:10.636017   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserving static IP address...
	I0828 18:22:10.636035   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has current primary IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.636476   77396 main.go:141] libmachine: (old-k8s-version-131737) Reserved static IP address: 192.168.50.99
	I0828 18:22:10.636507   77396 main.go:141] libmachine: (old-k8s-version-131737) Waiting for SSH to be available...
	I0828 18:22:10.636529   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.636550   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | skip adding static IP to network mk-old-k8s-version-131737 - found existing host DHCP lease matching {name: "old-k8s-version-131737", mac: "52:54:00:21:f1:8b", ip: "192.168.50.99"}
	I0828 18:22:10.636565   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Getting to WaitForSSH function...
	I0828 18:22:10.638762   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639118   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.639150   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.639274   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH client type: external
	I0828 18:22:10.639295   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa (-rw-------)
	I0828 18:22:10.639324   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.99 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:10.639340   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | About to run SSH command:
	I0828 18:22:10.639368   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | exit 0
	I0828 18:22:10.765932   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:10.766339   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetConfigRaw
	I0828 18:22:10.767003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:10.769525   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770006   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.770045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.770184   77396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/config.json ...
	I0828 18:22:10.770396   77396 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:10.770418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:10.770671   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.772685   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773010   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.773031   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.773182   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.773396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773583   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.773739   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.773904   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.774112   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.774125   77396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:10.874115   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:10.874150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874366   77396 buildroot.go:166] provisioning hostname "old-k8s-version-131737"
	I0828 18:22:10.874396   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:10.874600   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:10.876804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877106   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:10.877132   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:10.877237   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:10.877445   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877604   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:10.877763   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:10.877921   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:10.878123   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:10.878139   77396 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname
	I0828 18:22:10.999107   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-131737
	
	I0828 18:22:10.999144   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.002327   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.002771   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.002802   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.003036   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.003221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003425   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.003610   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.003769   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.003968   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.003986   77396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-131737' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-131737/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-131737' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:11.119461   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
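
The provisioning commands above (setting the hostname, patching /etc/hosts) are all executed over SSH against the guest VM using the per-machine key shown in the log. The sketch below shows one way to issue such a command with golang.org/x/crypto/ssh; the address, user, and key path mirror the log, and the code is an illustration, not minikube's actual ssh_runner.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and address copied from the log; treat them as placeholders.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.50.99:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same hostname provisioning command as in the log.
	out, err := session.CombinedOutput(`sudo hostname old-k8s-version-131737 && echo "old-k8s-version-131737" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
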
	I0828 18:22:11.119493   77396 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:11.119523   77396 buildroot.go:174] setting up certificates
	I0828 18:22:11.119535   77396 provision.go:84] configureAuth start
	I0828 18:22:11.119547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetMachineName
	I0828 18:22:11.119813   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.122564   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.122916   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.122945   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.123121   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.125575   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.125946   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.125973   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.126103   77396 provision.go:143] copyHostCerts
	I0828 18:22:11.126169   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:11.126192   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:11.126258   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:11.126390   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:11.126416   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:11.126453   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:11.126551   77396 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:11.126565   77396 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:11.126596   77396 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:11.126678   77396 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-131737 san=[127.0.0.1 192.168.50.99 localhost minikube old-k8s-version-131737]
	I0828 18:22:11.382096   77396 provision.go:177] copyRemoteCerts
	I0828 18:22:11.382161   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:11.382189   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.384698   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385045   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.385071   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.385221   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.385394   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.385527   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.385669   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.463818   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:11.487677   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0828 18:22:11.510454   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0828 18:22:11.532302   77396 provision.go:87] duration metric: took 412.75597ms to configureAuth
	I0828 18:22:11.532331   77396 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:11.532551   77396 config.go:182] Loaded profile config "old-k8s-version-131737": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0828 18:22:11.532627   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.535284   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535668   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.535700   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.535816   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.536003   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536138   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.536317   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.536444   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.536599   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.536626   77396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:11.757267   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:11.757297   77396 machine.go:96] duration metric: took 986.887935ms to provisionDockerMachine
	I0828 18:22:11.757311   77396 start.go:293] postStartSetup for "old-k8s-version-131737" (driver="kvm2")
	I0828 18:22:11.757325   77396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:11.757341   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.757701   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:11.757761   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.760433   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760764   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.760804   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.760949   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.761117   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.761288   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.761467   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.842091   77396 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:11.846271   77396 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:11.846294   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:11.846357   77396 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:11.846452   77396 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:11.846590   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:11.856373   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:11.879153   77396 start.go:296] duration metric: took 121.830018ms for postStartSetup
	I0828 18:22:11.879193   77396 fix.go:56] duration metric: took 19.592124568s for fixHost
	I0828 18:22:11.879218   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.882110   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882588   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.882638   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.882814   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.883017   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883241   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.883383   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.883540   77396 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:11.883704   77396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.99 22 <nil> <nil>}
	I0828 18:22:11.883715   77396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:11.990532   77396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869331.947970723
	
	I0828 18:22:11.990563   77396 fix.go:216] guest clock: 1724869331.947970723
	I0828 18:22:11.990574   77396 fix.go:229] Guest: 2024-08-28 18:22:11.947970723 +0000 UTC Remote: 2024-08-28 18:22:11.879198847 +0000 UTC m=+206.714077766 (delta=68.771876ms)
	I0828 18:22:11.990599   77396 fix.go:200] guest clock delta is within tolerance: 68.771876ms
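
The tolerance check above compares the guest's "date +%s.%N" output against the host-side timestamp recorded when the command returned and accepts the skew if the delta is small. A tiny Go sketch of that arithmetic follows, using the two timestamps from this log; the one-second tolerance is an assumed threshold, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Both values are copied from the log: the guest's "date +%s.%N" output and
	// the host-side timestamp of the call. The fractional part has 9 digits here,
	// so it can be parsed directly as nanoseconds.
	const guestStamp = "1724869331.947970723"
	hostTime := time.Date(2024, 8, 28, 18, 22, 11, 879198847, time.UTC)

	parts := strings.SplitN(guestStamp, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guestTime := time.Unix(sec, nsec).UTC()

	delta := guestTime.Sub(hostTime)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = time.Second // assumed threshold, not taken from the log
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
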
	I0828 18:22:11.990605   77396 start.go:83] releasing machines lock for "old-k8s-version-131737", held for 19.703582254s
	I0828 18:22:11.990648   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.990935   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:11.993283   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993690   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.993725   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.993908   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994418   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994630   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .DriverName
	I0828 18:22:11.994718   77396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:11.994768   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.994836   77396 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:11.994864   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHHostname
	I0828 18:22:11.997521   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997693   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.997952   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.997974   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998001   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:11.998022   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:11.998150   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998251   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHPort
	I0828 18:22:11.998384   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998466   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHKeyPath
	I0828 18:22:11.998547   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998650   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetSSHUsername
	I0828 18:22:11.998665   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:11.998813   77396 sshutil.go:53] new ssh client: &{IP:192.168.50.99 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/old-k8s-version-131737/id_rsa Username:docker}
	I0828 18:22:12.079201   77396 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:12.116862   77396 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:12.268437   77396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:12.274689   77396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:12.274768   77396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:12.299532   77396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:12.299561   77396 start.go:495] detecting cgroup driver to use...
	I0828 18:22:12.299633   77396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:12.321322   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:12.336273   77396 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:12.336345   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:12.350625   77396 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:12.364155   77396 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:12.475639   77396 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:12.636052   77396 docker.go:233] disabling docker service ...
	I0828 18:22:12.636144   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:12.655431   77396 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:12.673744   77396 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:12.865232   77396 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:12.993530   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:13.006666   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:13.023529   77396 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0828 18:22:13.023617   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.032944   77396 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:13.033014   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.042494   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.052172   77396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:13.062869   77396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:13.073254   77396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:13.081968   77396 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:13.082032   77396 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:13.096163   77396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:13.106942   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:13.229752   77396 ssh_runner.go:195] Run: sudo systemctl restart crio
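
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, switch CRI-O to the cgroupfs manager, and set conmon's cgroup, then reload systemd and restart crio. A hedged Go sketch of the same text substitutions is below; it assumes the drop-in file already contains pause_image and cgroup_manager lines, exactly as the sed patterns do, and it does not attempt the service restart.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Mirror of the logged sed commands: drop any existing conmon_cgroup line,
	// pin the pause image, and force cgroupfs with conmon in the "pod" cgroup.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\n"+`conmon_cgroup = "pod"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
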
	I0828 18:22:13.333809   77396 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:13.333870   77396 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:13.339539   77396 start.go:563] Will wait 60s for crictl version
	I0828 18:22:13.339615   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:13.343618   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:13.387552   77396 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:13.387647   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.417440   77396 ssh_runner.go:195] Run: crio --version
	I0828 18:22:13.451222   77396 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0828 18:22:13.452432   77396 main.go:141] libmachine: (old-k8s-version-131737) Calling .GetIP
	I0828 18:22:13.455750   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456127   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:f1:8b", ip: ""} in network mk-old-k8s-version-131737: {Iface:virbr4 ExpiryTime:2024-08-28 19:22:03 +0000 UTC Type:0 Mac:52:54:00:21:f1:8b Iaid: IPaddr:192.168.50.99 Prefix:24 Hostname:old-k8s-version-131737 Clientid:01:52:54:00:21:f1:8b}
	I0828 18:22:13.456158   77396 main.go:141] libmachine: (old-k8s-version-131737) DBG | domain old-k8s-version-131737 has defined IP address 192.168.50.99 and MAC address 52:54:00:21:f1:8b in network mk-old-k8s-version-131737
	I0828 18:22:13.456465   77396 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:13.460719   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
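
The one-liner above rewrites /etc/hosts so that host.minikube.internal points at the gateway IP: it filters out any stale entry and appends a fresh one. A small Go sketch of the same rewrite follows; the IP and file path are taken from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.1\thost.minikube.internal" // gateway IP as in the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Drop any existing host.minikube.internal line, then append the fresh entry,
	// mirroring the grep -v / echo pipeline in the log.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", hostsPath)
}
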
	I0828 18:22:13.474168   77396 kubeadm.go:883] updating cluster {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:13.474315   77396 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 18:22:13.474381   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:13.519869   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:13.519940   77396 ssh_runner.go:195] Run: which lz4
	I0828 18:22:13.524479   77396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0828 18:22:13.528475   77396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0828 18:22:13.528511   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0828 18:22:15.039582   77396 crio.go:462] duration metric: took 1.515144029s to copy over tarball
	I0828 18:22:15.039666   77396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
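
These lines show the preload fallback: the node has no /preloaded.tar.lz4, so the cached tarball is copied over and extracted into /var with xattrs preserved. A minimal standalone sketch of that flow, assuming a hypothetical copyToNode helper in place of minikube's scp-over-SSH and a cache path modeled on the one in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const remoteTarball = "/preloaded.tar.lz4"

func main() {
	// Location of the cached tarball; path modeled on the log, made $HOME-relative here.
	local := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4")

	// The stat in the log fails with status 1, so the copy step happens.
	if _, err := os.Stat(remoteTarball); err != nil {
		if err := copyToNode(local, remoteTarball); err != nil {
			fmt.Println("copy failed:", err)
			return
		}
	}

	// Extract the preloaded images into /var, preserving security xattrs,
	// mirroring the tar invocation shown above.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", remoteTarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
	}
}

// copyToNode stands in for minikube's scp step; here it is just a local file copy.
func copyToNode(src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, 0o644)
}
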
	I0828 18:22:11.342592   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:13.343159   76486 node_ready.go:53] node "default-k8s-diff-port-640552" has status "Ready":"False"
	I0828 18:22:14.844412   76486 node_ready.go:49] node "default-k8s-diff-port-640552" has status "Ready":"True"
	I0828 18:22:14.844443   76486 node_ready.go:38] duration metric: took 7.505958149s for node "default-k8s-diff-port-640552" to be "Ready" ...
	I0828 18:22:14.844457   76486 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:14.852970   76486 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858426   76486 pod_ready.go:93] pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:14.858454   76486 pod_ready.go:82] duration metric: took 5.455024ms for pod "coredns-6f6b679f8f-t5lx6" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:14.858467   76486 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:12.014690   75908 main.go:141] libmachine: (no-preload-072854) Calling .Start
	I0828 18:22:12.014870   75908 main.go:141] libmachine: (no-preload-072854) Ensuring networks are active...
	I0828 18:22:12.015716   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network default is active
	I0828 18:22:12.016229   75908 main.go:141] libmachine: (no-preload-072854) Ensuring network mk-no-preload-072854 is active
	I0828 18:22:12.016663   75908 main.go:141] libmachine: (no-preload-072854) Getting domain xml...
	I0828 18:22:12.017534   75908 main.go:141] libmachine: (no-preload-072854) Creating domain...
	I0828 18:22:13.381018   75908 main.go:141] libmachine: (no-preload-072854) Waiting to get IP...
	I0828 18:22:13.381905   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.382463   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.382515   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.382439   78447 retry.go:31] will retry after 308.332294ms: waiting for machine to come up
	I0828 18:22:13.692047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:13.692496   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:13.692537   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:13.692434   78447 retry.go:31] will retry after 374.325088ms: waiting for machine to come up
	I0828 18:22:14.068154   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.068770   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.068799   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.068736   78447 retry.go:31] will retry after 465.939187ms: waiting for machine to come up
	I0828 18:22:14.536497   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.537032   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.537055   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.536989   78447 retry.go:31] will retry after 374.795357ms: waiting for machine to come up
	I0828 18:22:14.913413   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:14.914015   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:14.914047   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:14.913964   78447 retry.go:31] will retry after 726.118647ms: waiting for machine to come up
	I0828 18:22:15.641971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:15.642532   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:15.642559   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:15.642483   78447 retry.go:31] will retry after 951.90632ms: waiting for machine to come up
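
The retry.go lines above poll libvirt for the new machine's DHCP lease, sleeping a randomized delay between attempts that tends to grow over time. A self-contained sketch of that wait loop, similar in spirit; the lookup function here is only a stand-in for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a randomized,
// growing delay between attempts, like the "will retry after ..." lines above.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 3*time.Second {
			delay += delay / 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Fake lookup that "finds" the address after a few attempts.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.138", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}
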
	I0828 18:22:15.745367   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.244292   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:18.094470   77396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.054779864s)
	I0828 18:22:18.094500   77396 crio.go:469] duration metric: took 3.054883651s to extract the tarball
	I0828 18:22:18.094507   77396 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0828 18:22:18.138235   77396 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:18.172461   77396 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0828 18:22:18.172484   77396 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:18.172527   77396 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.172572   77396 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.172589   77396 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.172646   77396 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0828 18:22:18.172819   77396 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.172608   77396 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.172823   77396 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.172990   77396 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174545   77396 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:18.174579   77396 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.174598   77396 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0828 18:22:18.174609   77396 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.174551   77396 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.174904   77396 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.415540   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0828 18:22:18.461528   77396 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0828 18:22:18.461577   77396 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0828 18:22:18.461617   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.466065   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.471602   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.476041   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.480111   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.484307   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.500185   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.519236   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.538341   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.614022   77396 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0828 18:22:18.614068   77396 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.614150   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649875   77396 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0828 18:22:18.649927   77396 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.649945   77396 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0828 18:22:18.649976   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.649980   77396 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.650035   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.665128   77396 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0828 18:22:18.665173   77396 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.665225   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686246   77396 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0828 18:22:18.686288   77396 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.686303   77396 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0828 18:22:18.686336   77396 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.686375   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686417   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0828 18:22:18.686339   77396 ssh_runner.go:195] Run: which crictl
	I0828 18:22:18.686483   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.686527   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.686558   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.686599   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775824   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.775875   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.803911   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:18.803983   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0828 18:22:18.822129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:18.822230   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:18.822232   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:18.912309   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0828 18:22:18.912514   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:18.912662   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003129   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0828 18:22:19.003169   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0828 18:22:19.003183   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0828 18:22:19.003201   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0828 18:22:19.003137   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0828 18:22:19.003292   77396 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0828 18:22:19.108957   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0828 18:22:19.109000   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0828 18:22:19.109047   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0828 18:22:19.108961   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0828 18:22:19.109144   77396 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0828 18:22:19.340554   77396 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:19.486655   77396 cache_images.go:92] duration metric: took 1.314154463s to LoadCachedImages
	W0828 18:22:19.486742   77396 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
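
The cache_images flow above inspects each expected image in the runtime, removes any whose ID does not match, and then tries to load the cached copy from disk; here it fails because the cached pause_3.2 file is missing. A rough sketch of that decision logic, with the image ID and cache path taken from the log but the mapping and directory layout otherwise assumed:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// expected image ID for registry.k8s.io/pause:3.2, taken from the log above.
var expected = map[string]string{
	"registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
}

// needsTransfer reports whether the image in the container runtime (if any)
// differs from the ID we expect, mirroring the "needs transfer" checks.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

// cachePath maps an image name to an on-disk cache file like the one loaded
// in the log; the base directory and naming rule are assumptions of this sketch.
func cachePath(image string) string {
	return filepath.Join(os.Getenv("HOME"), ".minikube/cache/images/amd64",
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	for image, id := range expected {
		if !needsTransfer(image, id) {
			continue
		}
		fmt.Println("needs transfer:", image)
		// Remove the mismatched image, then try the cached copy on disk.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		if _, err := os.Stat(cachePath(image)); err != nil {
			fmt.Println("unable to load cached image:", err) // the warning seen above
		}
	}
}
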
	I0828 18:22:19.486760   77396 kubeadm.go:934] updating node { 192.168.50.99 8443 v1.20.0 crio true true} ...
	I0828 18:22:19.486898   77396 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-131737 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 18:22:19.486979   77396 ssh_runner.go:195] Run: crio config
	I0828 18:22:19.530549   77396 cni.go:84] Creating CNI manager for ""
	I0828 18:22:19.530579   77396 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:19.530592   77396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:19.530621   77396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.99 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-131737 NodeName:old-k8s-version-131737 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0828 18:22:19.530797   77396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-131737"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.99
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.99"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:19.530870   77396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0828 18:22:19.545081   77396 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:19.545179   77396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:19.558002   77396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0828 18:22:19.577056   77396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:19.595848   77396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0828 18:22:19.614164   77396 ssh_runner.go:195] Run: grep 192.168.50.99	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:19.618274   77396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.99	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
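
The one-liner above rewrites /etc/hosts so it contains exactly one control-plane.minikube.internal entry: existing lines for that host are filtered out, the new mapping is appended, and the result is copied over the original file. An equivalent native-Go sketch of the same effect (not the command minikube actually runs):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so that exactly one line maps host to
// ip, matching the grep -v / echo / cp one-liner shown in the log.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line that already ends in a tab plus the host name.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, hostsPath) // swap the file in one step, like the final cp
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.99", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}
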
	I0828 18:22:19.631776   77396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:19.775809   77396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:19.793491   77396 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737 for IP: 192.168.50.99
	I0828 18:22:19.793521   77396 certs.go:194] generating shared ca certs ...
	I0828 18:22:19.793544   77396 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:19.793722   77396 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:19.793776   77396 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:19.793788   77396 certs.go:256] generating profile certs ...
	I0828 18:22:19.793928   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/client.key
	I0828 18:22:19.793993   77396 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key.131f8aa0
	I0828 18:22:19.794043   77396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key
	I0828 18:22:19.794211   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:19.794279   77396 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:19.794292   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:19.794322   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:19.794353   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:19.794379   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:19.794447   77396 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:19.795621   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:19.831614   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:19.874281   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:19.927912   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:19.967892   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0828 18:22:20.010378   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 18:22:20.036730   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:20.064707   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/old-k8s-version-131737/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 18:22:20.089246   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:20.116913   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:20.151729   77396 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:20.174509   77396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:20.190911   77396 ssh_runner.go:195] Run: openssl version
	I0828 18:22:16.865253   76486 pod_ready.go:103] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:17.867833   76486 pod_ready.go:93] pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.867859   76486 pod_ready.go:82] duration metric: took 3.009384484s for pod "etcd-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.867869   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.875975   76486 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:17.876008   76486 pod_ready.go:82] duration metric: took 8.131826ms for pod "kube-apiserver-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:17.876022   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883334   76486 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.883363   76486 pod_ready.go:82] duration metric: took 1.007332551s for pod "kube-controller-manager-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.883377   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890003   76486 pod_ready.go:93] pod "kube-proxy-lmpft" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.890032   76486 pod_ready.go:82] duration metric: took 6.647273ms for pod "kube-proxy-lmpft" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.890045   76486 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895629   76486 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace has status "Ready":"True"
	I0828 18:22:18.895658   76486 pod_ready.go:82] duration metric: took 5.60504ms for pod "kube-scheduler-default-k8s-diff-port-640552" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:18.895672   76486 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:16.595708   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:16.596190   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:16.596219   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:16.596152   78447 retry.go:31] will retry after 1.127921402s: waiting for machine to come up
	I0828 18:22:17.725174   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:17.725707   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:17.725736   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:17.725653   78447 retry.go:31] will retry after 959.892711ms: waiting for machine to come up
	I0828 18:22:18.686818   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:18.687269   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:18.687291   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:18.687225   78447 retry.go:31] will retry after 1.541922737s: waiting for machine to come up
	I0828 18:22:20.231099   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:20.231669   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:20.231697   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:20.231621   78447 retry.go:31] will retry after 1.601924339s: waiting for machine to come up
	I0828 18:22:20.743848   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:22.745091   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:20.198369   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:20.208787   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213735   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.213798   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:20.219855   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:20.230970   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:20.243428   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248105   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.248169   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:20.253803   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:20.264495   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:20.275530   77396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280118   77396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.280179   77396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:20.286135   77396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:20.296995   77396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:20.302843   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:20.309214   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:20.314977   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:20.321177   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:20.327689   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:20.334176   77396 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
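
Each of the openssl invocations above uses -checkend 86400, which exits non-zero if the certificate will expire within the next 24 hours; an expiring cert would trigger regeneration. A small sketch of that validity check over the same certificate paths:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h mirrors the openssl calls above: -checkend 86400 makes
// openssl exit non-zero if the certificate expires within 24 hours.
func certValidFor24h(path string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
		"/var/lib/minikube/certs/etcd/peer.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid for 24h: %v\n", c, certValidFor24h(c))
	}
}
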
	I0828 18:22:20.340478   77396 kubeadm.go:392] StartCluster: {Name:old-k8s-version-131737 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-131737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:20.340589   77396 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:20.340666   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.377288   77396 cri.go:89] found id: ""
	I0828 18:22:20.377366   77396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:20.387774   77396 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:20.387796   77396 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:20.387846   77396 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:20.398086   77396 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:20.399369   77396 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-131737" does not appear in /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:20.400118   77396 kubeconfig.go:62] /home/jenkins/minikube-integration/19529-10317/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-131737" cluster setting kubeconfig missing "old-k8s-version-131737" context setting]
	I0828 18:22:20.401248   77396 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:20.464577   77396 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:20.475116   77396 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.99
	I0828 18:22:20.475161   77396 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:20.475172   77396 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:20.475233   77396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:20.509801   77396 cri.go:89] found id: ""
	I0828 18:22:20.509881   77396 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:20.527245   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:20.537526   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:20.537548   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:20.537603   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:20.546096   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:20.546168   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:20.555608   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:20.564344   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:20.564405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:20.573551   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.582191   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:20.582248   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:20.592105   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:20.601563   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:20.601624   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
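
The grep / rm -f sequence above keeps an existing kubeconfig-style file only if it already references https://control-plane.minikube.internal:8443, and deletes it otherwise so kubeadm can regenerate it. A compact sketch of that pruning pattern:

package main

import (
	"fmt"
	"os/exec"
)

func pruneStaleConfig(endpoint string, files []string) {
	for _, f := range files {
		// Keep the file only if it already references the expected endpoint;
		// otherwise remove it so kubeadm regenerates it (grep / rm -f pattern above).
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	pruneStaleConfig("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
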
	I0828 18:22:20.612220   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:20.621113   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:20.738800   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.351223   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.564678   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.659764   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:21.748789   77396 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:21.748886   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.249370   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:22.749578   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.249982   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:23.749304   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.249774   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:24.749363   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
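
The repeated pgrep runs above poll roughly every 500ms until a kube-apiserver process whose command line mentions minikube appears. A standalone sketch of that wait:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep every 500ms until a kube-apiserver process
// matching the same pattern as the log's "sudo pgrep -xnf kube-apiserver.*minikube.*"
// shows up, or the timeout elapses.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	fmt.Println(waitForAPIServer(2 * time.Minute))
}
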
	I0828 18:22:20.928806   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:23.402840   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:21.835332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:21.835849   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:21.835884   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:21.835787   78447 retry.go:31] will retry after 2.437330454s: waiting for machine to come up
	I0828 18:22:24.275082   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:24.275523   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:24.275553   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:24.275493   78447 retry.go:31] will retry after 2.288360059s: waiting for machine to come up
	I0828 18:22:26.564963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:26.565404   75908 main.go:141] libmachine: (no-preload-072854) DBG | unable to find current IP address of domain no-preload-072854 in network mk-no-preload-072854
	I0828 18:22:26.565432   75908 main.go:141] libmachine: (no-preload-072854) DBG | I0828 18:22:26.565358   78447 retry.go:31] will retry after 2.911207221s: waiting for machine to come up
	I0828 18:22:25.243485   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:27.744153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:25.249675   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.749573   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.249942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:26.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.249956   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:27.749065   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.249309   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:28.749697   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.249151   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:29.749206   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:25.902220   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:28.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.402648   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:29.479385   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479953   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has current primary IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.479975   75908 main.go:141] libmachine: (no-preload-072854) Found IP for machine: 192.168.61.138
	I0828 18:22:29.479988   75908 main.go:141] libmachine: (no-preload-072854) Reserving static IP address...
	I0828 18:22:29.480455   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.480476   75908 main.go:141] libmachine: (no-preload-072854) Reserved static IP address: 192.168.61.138
	I0828 18:22:29.480490   75908 main.go:141] libmachine: (no-preload-072854) DBG | skip adding static IP to network mk-no-preload-072854 - found existing host DHCP lease matching {name: "no-preload-072854", mac: "52:54:00:56:8e:fa", ip: "192.168.61.138"}
	I0828 18:22:29.480500   75908 main.go:141] libmachine: (no-preload-072854) DBG | Getting to WaitForSSH function...
	I0828 18:22:29.480509   75908 main.go:141] libmachine: (no-preload-072854) Waiting for SSH to be available...
	I0828 18:22:29.483163   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483478   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.483509   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.483617   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH client type: external
	I0828 18:22:29.483636   75908 main.go:141] libmachine: (no-preload-072854) DBG | Using SSH private key: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa (-rw-------)
	I0828 18:22:29.483673   75908 main.go:141] libmachine: (no-preload-072854) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0828 18:22:29.483691   75908 main.go:141] libmachine: (no-preload-072854) DBG | About to run SSH command:
	I0828 18:22:29.483705   75908 main.go:141] libmachine: (no-preload-072854) DBG | exit 0
	I0828 18:22:29.606048   75908 main.go:141] libmachine: (no-preload-072854) DBG | SSH cmd err, output: <nil>: 
	I0828 18:22:29.606410   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetConfigRaw
	I0828 18:22:29.607071   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.609374   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609733   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.609763   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.609984   75908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/config.json ...
	I0828 18:22:29.610223   75908 machine.go:93] provisionDockerMachine start ...
	I0828 18:22:29.610245   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:29.610451   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.612963   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613409   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.613431   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.613494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.613688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.613988   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.614165   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.614339   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.614355   75908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 18:22:29.714325   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0828 18:22:29.714360   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714596   75908 buildroot.go:166] provisioning hostname "no-preload-072854"
	I0828 18:22:29.714621   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.714829   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.717545   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.717914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.717939   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.718102   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.718312   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718513   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.718676   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.718848   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.719009   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.719026   75908 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-072854 && echo "no-preload-072854" | sudo tee /etc/hostname
	I0828 18:22:29.835992   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-072854
	
	I0828 18:22:29.836024   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.839134   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839621   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.839654   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.839909   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:29.840128   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840324   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:29.840540   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:29.840742   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:29.840973   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:29.841005   75908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-072854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-072854/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-072854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 18:22:29.951089   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 18:22:29.951125   75908 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19529-10317/.minikube CaCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19529-10317/.minikube}
	I0828 18:22:29.951149   75908 buildroot.go:174] setting up certificates
	I0828 18:22:29.951162   75908 provision.go:84] configureAuth start
	I0828 18:22:29.951178   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetMachineName
	I0828 18:22:29.951496   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:29.954309   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954663   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.954694   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.954817   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:29.957076   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957345   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:29.957365   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:29.957550   75908 provision.go:143] copyHostCerts
	I0828 18:22:29.957606   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem, removing ...
	I0828 18:22:29.957624   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem
	I0828 18:22:29.957683   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/ca.pem (1078 bytes)
	I0828 18:22:29.957792   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem, removing ...
	I0828 18:22:29.957807   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem
	I0828 18:22:29.957831   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/cert.pem (1123 bytes)
	I0828 18:22:29.957913   75908 exec_runner.go:144] found /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem, removing ...
	I0828 18:22:29.957924   75908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem
	I0828 18:22:29.957951   75908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19529-10317/.minikube/key.pem (1679 bytes)
	I0828 18:22:29.958060   75908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem org=jenkins.no-preload-072854 san=[127.0.0.1 192.168.61.138 localhost minikube no-preload-072854]
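The provisioning step above issues a server certificate signed by the cluster CA, carrying both IP and DNS SANs. A rough Go sketch of issuing such a certificate with crypto/x509 follows; the file paths and the assumption that the CA key is PKCS#1-encoded RSA are illustrative, not taken from minikube's implementation.

    // sketch_server_cert.go: illustrative only — signs a server certificate
    // with an existing CA, using the SAN list shown in the log line above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Load the CA certificate and key (placeholder paths).
    	caCertPEM, err := os.ReadFile("ca.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	if err != nil {
    		log.Fatal(err)
    	}
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Fresh key pair for the server certificate.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// SANs as in the log: IPs and DNS names are kept in separate x509 fields.
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-072854"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.138")},
    		DNSNames:     []string{"localhost", "minikube", "no-preload-072854"},
    	}

    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }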
	I0828 18:22:30.038643   75908 provision.go:177] copyRemoteCerts
	I0828 18:22:30.038705   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 18:22:30.038730   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.041574   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.041914   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.041946   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.042125   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.042306   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.042460   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.042618   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.124224   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 18:22:30.148835   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0828 18:22:30.171599   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 18:22:30.195349   75908 provision.go:87] duration metric: took 244.171371ms to configureAuth
	I0828 18:22:30.195375   75908 buildroot.go:189] setting minikube options for container-runtime
	I0828 18:22:30.195580   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:30.195665   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.198535   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.198938   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.198961   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.199171   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.199349   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199490   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.199727   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.199917   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.200104   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.200125   75908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0828 18:22:30.422282   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0828 18:22:30.422314   75908 machine.go:96] duration metric: took 812.07707ms to provisionDockerMachine
	I0828 18:22:30.422328   75908 start.go:293] postStartSetup for "no-preload-072854" (driver="kvm2")
	I0828 18:22:30.422341   75908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 18:22:30.422361   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.422658   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 18:22:30.422688   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.425627   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426006   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.426047   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.426199   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.426401   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.426539   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.426675   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.508399   75908 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 18:22:30.512395   75908 info.go:137] Remote host: Buildroot 2023.02.9
	I0828 18:22:30.512418   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/addons for local assets ...
	I0828 18:22:30.512505   75908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19529-10317/.minikube/files for local assets ...
	I0828 18:22:30.512603   75908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem -> 175282.pem in /etc/ssl/certs
	I0828 18:22:30.512723   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0828 18:22:30.522105   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:30.545166   75908 start.go:296] duration metric: took 122.822966ms for postStartSetup
	I0828 18:22:30.545203   75908 fix.go:56] duration metric: took 18.554447914s for fixHost
	I0828 18:22:30.545221   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.548255   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548658   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.548683   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.548867   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.549078   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549251   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.549378   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.549555   75908 main.go:141] libmachine: Using SSH client type: native
	I0828 18:22:30.549774   75908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.138 22 <nil> <nil>}
	I0828 18:22:30.549788   75908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0828 18:22:30.650663   75908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724869350.622150588
	
	I0828 18:22:30.650688   75908 fix.go:216] guest clock: 1724869350.622150588
	I0828 18:22:30.650699   75908 fix.go:229] Guest: 2024-08-28 18:22:30.622150588 +0000 UTC Remote: 2024-08-28 18:22:30.545207555 +0000 UTC m=+354.015941485 (delta=76.943033ms)
	I0828 18:22:30.650723   75908 fix.go:200] guest clock delta is within tolerance: 76.943033ms
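The clock check above reads the guest's time with "date +%s.%N" over SSH and compares it with the host clock, accepting small deltas. A small Go sketch of that comparison; the host address and the one-second tolerance are placeholder values, not the tolerance minikube actually uses.

    // sketch_clock_skew.go: illustrative only — compares guest vs. host time
    // and flags a delta larger than a chosen tolerance.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strconv"
    	"strings"
    	"time"
    )

    func main() {
    	const tolerance = time.Second // placeholder tolerance

    	out, err := exec.Command("ssh", "docker@192.168.61.138", "date +%s.%N").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
    	if err != nil {
    		log.Fatal(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))

    	delta := guest.Sub(time.Now())
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta: %v\n", delta)
    	if delta > tolerance {
    		fmt.Println("clock skew exceeds tolerance; the guest clock may need adjusting")
    	}
    }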
	I0828 18:22:30.650741   75908 start.go:83] releasing machines lock for "no-preload-072854", held for 18.660017717s
	I0828 18:22:30.650770   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.651011   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:30.653715   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654110   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.654150   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.654274   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.654882   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655093   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:30.655173   75908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 18:22:30.655235   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.655319   75908 ssh_runner.go:195] Run: cat /version.json
	I0828 18:22:30.655339   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:30.658052   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658097   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658440   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658470   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658507   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:30.658520   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:30.658677   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658804   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:30.658899   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659098   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:30.659131   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659272   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:30.659276   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.659426   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:30.769716   75908 ssh_runner.go:195] Run: systemctl --version
	I0828 18:22:30.775522   75908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0828 18:22:30.918471   75908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0828 18:22:30.924338   75908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0828 18:22:30.924416   75908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 18:22:30.939462   75908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0828 18:22:30.939489   75908 start.go:495] detecting cgroup driver to use...
	I0828 18:22:30.939589   75908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0828 18:22:30.956324   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0828 18:22:30.970243   75908 docker.go:217] disabling cri-docker service (if available) ...
	I0828 18:22:30.970319   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0828 18:22:30.983636   75908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0828 18:22:30.996989   75908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0828 18:22:31.116994   75908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0828 18:22:31.290216   75908 docker.go:233] disabling docker service ...
	I0828 18:22:31.290291   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0828 18:22:31.305578   75908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0828 18:22:31.318402   75908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0828 18:22:31.446431   75908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0828 18:22:31.570180   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0828 18:22:31.583862   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 18:22:31.602513   75908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0828 18:22:31.602577   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.613726   75908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0828 18:22:31.613798   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.627405   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.638648   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.648905   75908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 18:22:31.660365   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.670925   75908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.689052   75908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0828 18:22:31.699345   75908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 18:22:31.708691   75908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0828 18:22:31.708753   75908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0828 18:22:31.721500   75908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 18:22:31.730798   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:31.858773   75908 ssh_runner.go:195] Run: sudo systemctl restart crio
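The sequence above rewrites the pause image and cgroup manager in the CRI-O drop-in with sed and then restarts crio. A Go sketch that performs the same two substitutions on the file from the log (it would need to run as root; the follow-up restart is left as a comment). This mirrors the logged sed expressions and is not minikube's own code.

    // sketch_crio_conf.go: illustrative only — applies the pause_image and
    // cgroup_manager replacements shown in the sed commands above.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"

    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

    	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(path, data, 0644); err != nil {
    		log.Fatal(err)
    	}
    	// A real run would follow with: systemctl daemon-reload && systemctl restart crio
    }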
	I0828 18:22:31.945345   75908 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0828 18:22:31.945419   75908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0828 18:22:31.949720   75908 start.go:563] Will wait 60s for crictl version
	I0828 18:22:31.949784   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:31.953193   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 18:22:31.990360   75908 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0828 18:22:31.990440   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.019756   75908 ssh_runner.go:195] Run: crio --version
	I0828 18:22:32.048117   75908 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
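Before continuing, the start-up code waits up to 60 seconds for the CRI-O socket to exist and then queries crictl for the runtime version, as logged above. A compact Go sketch of that wait-then-query step; the half-second poll interval is an assumption.

    // sketch_wait_socket.go: illustrative only — waits for the CRI-O socket
    // and then prints the crictl version report.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		log.Fatal(err)
    	}
    	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(out))
    }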
	I0828 18:22:29.744207   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.243511   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:30.249883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:30.749652   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.249973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:31.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.249415   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:32.749545   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.249768   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:33.749104   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.249819   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:34.749727   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
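The 77396 process above keeps probing for a running kube-apiserver with pgrep at roughly half-second intervals. A Go sketch of the same polling loop; the four-minute deadline is a guess rather than a value taken from the log.

    // sketch_apiserver_wait.go: illustrative only — repeats the pgrep probe
    // shown above until a kube-apiserver process is found or a deadline passes.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // placeholder deadline
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil && strings.TrimSpace(string(out)) != "" {
    			fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		// pgrep exits non-zero when no process matches; just try again.
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("kube-apiserver did not come up before the deadline")
    }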
	I0828 18:22:32.901907   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:34.907432   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:32.049494   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetIP
	I0828 18:22:32.052227   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052548   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:32.052585   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:32.052800   75908 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0828 18:22:32.056788   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 18:22:32.068700   75908 kubeadm.go:883] updating cluster {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 18:22:32.068814   75908 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 18:22:32.068847   75908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0828 18:22:32.103085   75908 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0828 18:22:32.103111   75908 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0828 18:22:32.103153   75908 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.103194   75908 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.103240   75908 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.103260   75908 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.103331   75908 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.103379   75908 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.103433   75908 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.103242   75908 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104775   75908 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.104806   75908 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.104829   75908 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.104777   75908 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0828 18:22:32.104776   75908 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.104781   75908 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.343173   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0828 18:22:32.343209   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.409616   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.418908   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.447831   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.453065   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.453813   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.494045   75908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0828 18:22:32.494090   75908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0828 18:22:32.494121   75908 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.494122   75908 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.494157   75908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0828 18:22:32.494168   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494169   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.494179   75908 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.494209   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546592   75908 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0828 18:22:32.546634   75908 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.546655   75908 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0828 18:22:32.546682   75908 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.546698   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546724   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546807   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.546829   75908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0828 18:22:32.546849   75908 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.546880   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.546891   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:32.546910   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.557550   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.593306   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.593328   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.648848   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.648913   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.648922   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.648973   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.704513   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.717712   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0828 18:22:32.779954   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0828 18:22:32.780015   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0828 18:22:32.780080   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.780148   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0828 18:22:32.814614   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0828 18:22:32.821580   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0828 18:22:32.821660   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.901464   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0828 18:22:32.901584   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:32.905004   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0828 18:22:32.905036   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0828 18:22:32.905102   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:32.905103   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0828 18:22:32.905144   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0828 18:22:32.905160   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905190   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0828 18:22:32.905105   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:32.905191   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:32.905205   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0828 18:22:32.907869   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0828 18:22:33.324215   75908 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292175   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (2.386961854s)
	I0828 18:22:35.292205   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0828 18:22:35.292234   75908 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292245   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.387114296s)
	I0828 18:22:35.292273   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0828 18:22:35.292301   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0828 18:22:35.292314   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0: (2.386985678s)
	I0828 18:22:35.292354   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0828 18:22:35.292358   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.387036145s)
	I0828 18:22:35.292367   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0: (2.387143897s)
	I0828 18:22:35.292375   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0828 18:22:35.292385   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0828 18:22:35.292409   75908 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.968164241s)
	I0828 18:22:35.292446   75908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0828 18:22:35.292456   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:35.292479   75908 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:35.292536   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:22:34.243832   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:36.744323   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:35.249587   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:35.749826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.249647   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:36.749792   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.249845   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.249577   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:38.749412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.249047   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:39.749564   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:37.402943   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:39.901715   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:37.064442   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.772111922s)
	I0828 18:22:37.064476   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0828 18:22:37.064498   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.064500   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.772021571s)
	I0828 18:22:37.064529   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0828 18:22:37.064536   75908 ssh_runner.go:235] Completed: which crictl: (1.771982077s)
	I0828 18:22:37.064603   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:37.064550   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0828 18:22:37.121169   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933342   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (1.868675318s)
	I0828 18:22:38.933379   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0828 18:22:38.933390   75908 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.812184072s)
	I0828 18:22:38.933486   75908 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:38.933400   75908 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.933543   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0828 18:22:38.983461   75908 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0828 18:22:38.983579   75908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:39.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:41.243732   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:40.249307   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:40.749120   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.249107   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.749895   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.249941   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:42.748952   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.249788   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:43.749898   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.249654   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:44.749350   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:41.903470   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:44.403257   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:42.534353   75908 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.550744503s)
	I0828 18:22:42.534392   75908 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0828 18:22:42.534430   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.600866705s)
	I0828 18:22:42.534448   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0828 18:22:42.534472   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:42.534521   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0828 18:22:44.602703   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.068154029s)
	I0828 18:22:44.602738   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0828 18:22:44.602765   75908 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:44.602809   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0828 18:22:45.948751   75908 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.345914789s)
	I0828 18:22:45.948794   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0828 18:22:45.948821   75908 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:45.948874   75908 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0828 18:22:43.742979   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.743892   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:47.745070   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:45.249353   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:45.749091   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.249897   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.748991   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.249385   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:47.749204   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.248962   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:48.749853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.249574   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.749028   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:46.403322   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:48.902485   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
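The pod_ready lines above repeatedly report that the metrics-server pod's Ready condition is still False. For reference, a client-go sketch that performs the equivalent check once; the kubeconfig path is a placeholder and the pod name is taken from the log.

    // sketch_pod_ready.go: illustrative only — fetches a pod and reports
    // whether its Ready condition is True.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"metrics-server-6867b74b74-lccm2", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}

    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }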
	I0828 18:22:46.594343   75908 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19529-10317/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0828 18:22:46.594405   75908 cache_images.go:123] Successfully loaded all cached images
	I0828 18:22:46.594413   75908 cache_images.go:92] duration metric: took 14.491290737s to LoadCachedImages
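	(The lines above show each cached image tarball being staged and loaded with "sudo podman load -i <tarball>" until LoadCachedImages completes. A minimal local sketch of that loop in Go is below; the tarball list and directory mirror the names in the log, but minikube itself issues these commands over SSH via ssh_runner, so this is illustrative only.)

package main

import (
	"fmt"
	"log"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	// Tarball names taken from the log above; the staging directory is the same
	// one minikube uses on the node, but any path would do for a local experiment.
	tarballs := []string{
		"etcd_3.5.15-0",
		"kube-proxy_v1.31.0",
		"kube-scheduler_v1.31.0",
		"storage-provisioner_v5",
	}
	for _, name := range tarballs {
		path := filepath.Join("/var/lib/minikube/images", name)
		start := time.Now()
		// Equivalent of: sudo podman load -i <path>
		out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput()
		if err != nil {
			log.Fatalf("podman load %s: %v\n%s", path, err, out)
		}
		fmt.Printf("loaded %s in %s\n", name, time.Since(start).Round(time.Millisecond))
	}
}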
	I0828 18:22:46.594428   75908 kubeadm.go:934] updating node { 192.168.61.138 8443 v1.31.0 crio true true} ...
	I0828 18:22:46.594562   75908 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-072854 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
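	(The kubelet unit drop-in printed above is rendered from the node's settings: kubelet binary path, hostname override, and node IP. The Go sketch below renders an equivalent drop-in with text/template; the struct and template are illustrative and not minikube's actual implementation.)

package main

import (
	"os"
	"text/template"
)

// kubeletOpts is an illustrative parameter set for the drop-in shown in the log.
type kubeletOpts struct {
	KubeletPath      string
	HostnameOverride string
	NodeIP           string
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above.
	opts := kubeletOpts{
		KubeletPath:      "/var/lib/minikube/binaries/v1.31.0/kubelet",
		HostnameOverride: "no-preload-072854",
		NodeIP:           "192.168.61.138",
	}
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}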
	I0828 18:22:46.594627   75908 ssh_runner.go:195] Run: crio config
	I0828 18:22:46.641210   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:46.641230   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:46.641240   75908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 18:22:46.641260   75908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.138 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-072854 NodeName:no-preload-072854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 18:22:46.641417   75908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-072854"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 18:22:46.641507   75908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 18:22:46.653042   75908 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 18:22:46.653110   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 18:22:46.671775   75908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0828 18:22:46.691485   75908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 18:22:46.707525   75908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0828 18:22:46.723642   75908 ssh_runner.go:195] Run: grep 192.168.61.138	control-plane.minikube.internal$ /etc/hosts
	I0828 18:22:46.727148   75908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
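	(The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current node IP before copying the result back. The Go sketch below performs the same idempotent rewrite against a scratch file; the host alias and IP come from the log, but this is not minikube's code.)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.61.138"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing entry for the control-plane alias.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	// Write to a scratch path; a real run would copy this over /etc/hosts with sudo,
	// as the logged command does.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}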
	I0828 18:22:46.738598   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:46.877354   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:46.896287   75908 certs.go:68] Setting up /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854 for IP: 192.168.61.138
	I0828 18:22:46.896309   75908 certs.go:194] generating shared ca certs ...
	I0828 18:22:46.896324   75908 certs.go:226] acquiring lock for ca certs: {Name:mkc0e22a5e4d8db098e67aefc1e015c59c483faf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:46.896488   75908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key
	I0828 18:22:46.896543   75908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key
	I0828 18:22:46.896578   75908 certs.go:256] generating profile certs ...
	I0828 18:22:46.896694   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/client.key
	I0828 18:22:46.896777   75908 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key.f9122682
	I0828 18:22:46.896833   75908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key
	I0828 18:22:46.896945   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem (1338 bytes)
	W0828 18:22:46.896975   75908 certs.go:480] ignoring /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528_empty.pem, impossibly tiny 0 bytes
	I0828 18:22:46.896984   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 18:22:46.897006   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/ca.pem (1078 bytes)
	I0828 18:22:46.897028   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/cert.pem (1123 bytes)
	I0828 18:22:46.897050   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/certs/key.pem (1679 bytes)
	I0828 18:22:46.897086   75908 certs.go:484] found cert: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem (1708 bytes)
	I0828 18:22:46.897777   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 18:22:46.940603   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 18:22:46.971255   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 18:22:47.009269   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 18:22:47.043849   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0828 18:22:47.081562   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0828 18:22:47.104248   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 18:22:47.127680   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/no-preload-072854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0828 18:22:47.150718   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 18:22:47.171449   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/certs/17528.pem --> /usr/share/ca-certificates/17528.pem (1338 bytes)
	I0828 18:22:47.192814   75908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/ssl/certs/175282.pem --> /usr/share/ca-certificates/175282.pem (1708 bytes)
	I0828 18:22:47.213607   75908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 18:22:47.229589   75908 ssh_runner.go:195] Run: openssl version
	I0828 18:22:47.235107   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17528.pem && ln -fs /usr/share/ca-certificates/17528.pem /etc/ssl/certs/17528.pem"
	I0828 18:22:47.245976   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250512   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 28 17:10 /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.250568   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17528.pem
	I0828 18:22:47.256305   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/17528.pem /etc/ssl/certs/51391683.0"
	I0828 18:22:47.267080   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/175282.pem && ln -fs /usr/share/ca-certificates/175282.pem /etc/ssl/certs/175282.pem"
	I0828 18:22:47.276961   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281311   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 28 17:10 /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.281388   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/175282.pem
	I0828 18:22:47.286823   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/175282.pem /etc/ssl/certs/3ec20f2e.0"
	I0828 18:22:47.298010   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 18:22:47.309303   75908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313555   75908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:52 /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.313604   75908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 18:22:47.319146   75908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 18:22:47.329851   75908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 18:22:47.333891   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0828 18:22:47.339544   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0828 18:22:47.344883   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0828 18:22:47.350419   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0828 18:22:47.355560   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0828 18:22:47.360987   75908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
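	(Each "openssl x509 ... -checkend 86400" call above asks whether the certificate expires within the next 24 hours, which is what decides whether minikube regenerates it. An equivalent check for a single PEM file in Go, as a sketch; the path is one of the certs from the log.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certs checked above; adjust the path as needed.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `openssl x509 -checkend 86400`:
	// will the cert still be valid 24 hours from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}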
	I0828 18:22:47.366392   75908 kubeadm.go:392] StartCluster: {Name:no-preload-072854 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-072854 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 18:22:47.366472   75908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0828 18:22:47.366518   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.407218   75908 cri.go:89] found id: ""
	I0828 18:22:47.407283   75908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 18:22:47.418518   75908 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0828 18:22:47.418541   75908 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0828 18:22:47.418599   75908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0828 18:22:47.429592   75908 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0828 18:22:47.430649   75908 kubeconfig.go:125] found "no-preload-072854" server: "https://192.168.61.138:8443"
	I0828 18:22:47.432727   75908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0828 18:22:47.443042   75908 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.138
	I0828 18:22:47.443072   75908 kubeadm.go:1160] stopping kube-system containers ...
	I0828 18:22:47.443084   75908 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0828 18:22:47.443132   75908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0828 18:22:47.483840   75908 cri.go:89] found id: ""
	I0828 18:22:47.483906   75908 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0828 18:22:47.499558   75908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:22:47.508932   75908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:22:47.508954   75908 kubeadm.go:157] found existing configuration files:
	
	I0828 18:22:47.508998   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:22:47.519003   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:22:47.519082   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:22:47.528248   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:22:47.536682   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:22:47.536744   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:22:47.545411   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.553945   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:22:47.554005   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:22:47.562837   75908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:22:47.571080   75908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:22:47.571141   75908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:22:47.579788   75908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:22:47.590221   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:47.707814   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.459935   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.669459   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.772934   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:48.886910   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:22:48.887010   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.387963   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.887167   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:49.923097   75908 api_server.go:72] duration metric: took 1.036200671s to wait for apiserver process to appear ...
	I0828 18:22:49.923147   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:22:49.923182   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:50.244153   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.245033   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:52.835389   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0828 18:22:52.835424   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0828 18:22:52.835439   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.938497   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.938528   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:52.938541   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:52.943233   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:52.943256   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.423531   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.428654   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.428675   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:53.924251   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:53.963729   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0828 18:22:53.963759   75908 api_server.go:103] status: https://192.168.61.138:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0828 18:22:54.423241   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:22:54.430345   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:22:54.436835   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:22:54.436858   75908 api_server.go:131] duration metric: took 4.513702157s to wait for apiserver health ...
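	(The wait above (api_server.go) repeatedly GETs https://<node>:8443/healthz, tolerating the 403 from anonymous access and the 500s while poststart hooks finish, until the endpoint returns 200. A stripped-down sketch of that polling behaviour follows; TLS verification is skipped here purely for illustration, whereas minikube authenticates with the cluster's client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: a real check should present client certs instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.138:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 (anonymous) and 500 (hooks still starting) are retried, as in the log.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}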
	I0828 18:22:54.436867   75908 cni.go:84] Creating CNI manager for ""
	I0828 18:22:54.436873   75908 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:22:54.438482   75908 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:22:50.249726   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:50.749045   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.249609   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.749060   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.249827   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:52.748985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.248958   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:53.748960   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.249581   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:54.749175   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:51.404355   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:53.904030   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:54.439656   75908 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:22:54.453060   75908 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:22:54.473537   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:22:54.489302   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:22:54.489340   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0828 18:22:54.489352   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0828 18:22:54.489369   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0828 18:22:54.489380   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0828 18:22:54.489392   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0828 18:22:54.489404   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0828 18:22:54.489414   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:22:54.489425   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0828 18:22:54.489434   75908 system_pods.go:74] duration metric: took 15.875803ms to wait for pod list to return data ...
	I0828 18:22:54.489446   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:22:54.494398   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:22:54.494428   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:22:54.494441   75908 node_conditions.go:105] duration metric: took 4.987547ms to run NodePressure ...
	I0828 18:22:54.494462   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0828 18:22:54.766427   75908 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771542   75908 kubeadm.go:739] kubelet initialised
	I0828 18:22:54.771571   75908 kubeadm.go:740] duration metric: took 5.116897ms waiting for restarted kubelet to initialise ...
	I0828 18:22:54.771582   75908 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:22:54.777783   75908 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.787163   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787193   75908 pod_ready.go:82] duration metric: took 9.382038ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.787205   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.787215   75908 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.791786   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791810   75908 pod_ready.go:82] duration metric: took 4.586002ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.791818   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "etcd-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.791826   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.796201   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796220   75908 pod_ready.go:82] duration metric: took 4.388906ms for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.796228   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-apiserver-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.796234   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:54.877071   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877104   75908 pod_ready.go:82] duration metric: took 80.86176ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:54.877118   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:54.877127   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.277179   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277206   75908 pod_ready.go:82] duration metric: took 400.069901ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.277215   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-proxy-tfxfd" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.277223   75908 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:55.676857   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676887   75908 pod_ready.go:82] duration metric: took 399.658558ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:55.676898   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "kube-scheduler-no-preload-072854" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:55.676904   75908 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:22:56.077491   75908 pod_ready.go:98] node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077525   75908 pod_ready.go:82] duration metric: took 400.610612ms for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:22:56.077535   75908 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-072854" hosting pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:56.077543   75908 pod_ready.go:39] duration metric: took 1.305948645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
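	(The pod_ready checks above wait up to 4m0s per system-critical pod and skip pods whose node is not yet Ready. The sketch below shows one way to wait for a single pod's Ready condition with client-go and apimachinery's wait helpers; it is not minikube's pod_ready.go, and the kubeconfig path is simply the one this run writes.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig written by this run (see the settings.go lines further below).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19529-10317/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Wait up to 4m for one of the system-critical pods named in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-072854", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}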
	I0828 18:22:56.077559   75908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:22:56.090851   75908 ops.go:34] apiserver oom_adj: -16
	I0828 18:22:56.090878   75908 kubeadm.go:597] duration metric: took 8.672328864s to restartPrimaryControlPlane
	I0828 18:22:56.090889   75908 kubeadm.go:394] duration metric: took 8.724501209s to StartCluster
	I0828 18:22:56.090909   75908 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.090980   75908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:22:56.092859   75908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:22:56.093177   75908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.138 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:22:56.093304   75908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:22:56.093391   75908 addons.go:69] Setting storage-provisioner=true in profile "no-preload-072854"
	I0828 18:22:56.093386   75908 config.go:182] Loaded profile config "no-preload-072854": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:22:56.093415   75908 addons.go:69] Setting default-storageclass=true in profile "no-preload-072854"
	I0828 18:22:56.093472   75908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-072854"
	I0828 18:22:56.093457   75908 addons.go:69] Setting metrics-server=true in profile "no-preload-072854"
	I0828 18:22:56.093501   75908 addons.go:234] Setting addon metrics-server=true in "no-preload-072854"
	I0828 18:22:56.093429   75908 addons.go:234] Setting addon storage-provisioner=true in "no-preload-072854"
	W0828 18:22:56.093516   75908 addons.go:243] addon metrics-server should already be in state true
	W0828 18:22:56.093518   75908 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093548   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.093869   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093904   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.093994   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.093969   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.094069   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.094796   75908 out.go:177] * Verifying Kubernetes components...
	I0828 18:22:56.096268   75908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:22:56.110476   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43867
	I0828 18:22:56.110685   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35765
	I0828 18:22:56.110791   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0828 18:22:56.111030   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111183   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111453   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.111592   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111603   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111710   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111720   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111820   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.111839   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.111892   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112043   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112214   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.112402   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.112440   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112474   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.112669   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.112711   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.115984   75908 addons.go:234] Setting addon default-storageclass=true in "no-preload-072854"
	W0828 18:22:56.116000   75908 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:22:56.116020   75908 host.go:66] Checking if "no-preload-072854" exists ...
	I0828 18:22:56.116245   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.116280   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.127848   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35747
	I0828 18:22:56.134902   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.135863   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.135892   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.136351   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.136536   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.138800   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.140837   75908 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:22:56.142271   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:22:56.142290   75908 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:22:56.142311   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.145770   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146271   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.146332   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.146572   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.146787   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.146958   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.147097   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.158402   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0828 18:22:56.158948   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.159531   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.159555   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.159622   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38379
	I0828 18:22:56.160033   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.160108   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.160578   75908 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:22:56.160608   75908 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:22:56.160864   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.160876   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.161318   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.161543   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.163449   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.165347   75908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:22:56.166532   75908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.166547   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:22:56.166564   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.170058   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170510   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.170536   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.170718   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.170900   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.171055   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.171193   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.177056   75908 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39279
	I0828 18:22:56.177458   75908 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:22:56.177969   75908 main.go:141] libmachine: Using API Version  1
	I0828 18:22:56.178001   75908 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:22:56.178335   75908 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:22:56.178537   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetState
	I0828 18:22:56.180056   75908 main.go:141] libmachine: (no-preload-072854) Calling .DriverName
	I0828 18:22:56.180261   75908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.180274   75908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:22:56.180288   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHHostname
	I0828 18:22:56.182971   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183550   75908 main.go:141] libmachine: (no-preload-072854) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:8e:fa", ip: ""} in network mk-no-preload-072854: {Iface:virbr3 ExpiryTime:2024-08-28 19:22:23 +0000 UTC Type:0 Mac:52:54:00:56:8e:fa Iaid: IPaddr:192.168.61.138 Prefix:24 Hostname:no-preload-072854 Clientid:01:52:54:00:56:8e:fa}
	I0828 18:22:56.183576   75908 main.go:141] libmachine: (no-preload-072854) DBG | domain no-preload-072854 has defined IP address 192.168.61.138 and MAC address 52:54:00:56:8e:fa in network mk-no-preload-072854
	I0828 18:22:56.183726   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHPort
	I0828 18:22:56.183879   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHKeyPath
	I0828 18:22:56.184042   75908 main.go:141] libmachine: (no-preload-072854) Calling .GetSSHUsername
	I0828 18:22:56.184212   75908 sshutil.go:53] new ssh client: &{IP:192.168.61.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/no-preload-072854/id_rsa Username:docker}
	I0828 18:22:56.333329   75908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:22:56.363605   75908 node_ready.go:35] waiting up to 6m0s for node "no-preload-072854" to be "Ready" ...
	I0828 18:22:56.444569   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:22:56.444591   75908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:22:56.466266   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:22:56.466288   75908 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:22:56.472695   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:22:56.494468   75908 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:22:56.494496   75908 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:22:56.499713   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:22:56.549699   75908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
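The lines above show the addon manifests being copied to /etc/kubernetes/addons and then applied with the kubectl binary bundled inside the VM, using the in-VM kubeconfig. A minimal sketch of an equivalent apply invocation, using the paths taken from the log; this runs the command locally rather than over SSH and only illustrates the shape of the call, not minikube's actual ssh_runner implementation:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// applyAddonManifests mirrors the apply step in the log above: the bundled
// kubectl is invoked with the in-VM kubeconfig against the manifests that
// were previously copied to /etc/kubernetes/addons. Illustrative only;
// minikube executes the same command remotely via its ssh_runner.
func applyAddonManifests(kubectlBin, kubeconfig string, manifests []string) (string, error) {
	cmd := fmt.Sprintf("sudo KUBECONFIG=%s %s apply -f %s",
		kubeconfig, kubectlBin, strings.Join(manifests, " -f "))
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	out, err := applyAddonManifests(
		"/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		manifests,
	)
	fmt.Print(out)
	if err != nil {
		log.Fatal(err)
	}
}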
	I0828 18:22:57.391629   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391655   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.391634   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.391724   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392046   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392063   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392072   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392068   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392080   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392108   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392046   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.392127   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.392144   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.392152   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.392322   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.392336   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.393780   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.393802   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.393846   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.397916   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.397937   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.398164   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.398183   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.398202   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520056   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520082   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520358   75908 main.go:141] libmachine: (no-preload-072854) DBG | Closing plugin on server side
	I0828 18:22:57.520373   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520392   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520435   75908 main.go:141] libmachine: Making call to close driver server
	I0828 18:22:57.520458   75908 main.go:141] libmachine: (no-preload-072854) Calling .Close
	I0828 18:22:57.520699   75908 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:22:57.520714   75908 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:22:57.520725   75908 addons.go:475] Verifying addon metrics-server=true in "no-preload-072854"
	I0828 18:22:57.522537   75908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0828 18:22:54.742708   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:56.744595   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:55.248933   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:55.749502   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.249976   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:56.749648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.249544   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:57.749769   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.249492   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:58.749787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.249693   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:22:59.749781   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
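The repeated pgrep runs above are one of the waiters probing for a running kube-apiserver process inside the VM; it keeps polling every half second until the pattern matches. A minimal local sketch of the same probe (pgrep exits non-zero when nothing matches, which the sketch treats as "not up yet"):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverPID reproduces the probe from the log above: pgrep -xnf matches
// the full command line of a kube-apiserver started by minikube. A non-zero
// exit (no match) means the apiserver process is not running yet.
func apiserverPID() (string, bool) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", false // pgrep exits 1 when nothing matches
	}
	return strings.TrimSpace(string(out)), true
}

func main() {
	if pid, ok := apiserverPID(); ok {
		fmt.Println("kube-apiserver running with PID", pid)
	} else {
		fmt.Println("kube-apiserver not running yet")
	}
}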
	I0828 18:22:56.402039   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:58.901738   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:22:57.523745   75908 addons.go:510] duration metric: took 1.430442724s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0828 18:22:58.367342   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:00.867911   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:22:59.243496   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:01.244209   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:00.249249   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.749724   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.248973   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:01.748932   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.249474   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:02.749966   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.249404   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:03.749805   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.248943   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:04.749828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:00.902675   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:03.402001   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:02.868286   75908 node_ready.go:53] node "no-preload-072854" has status "Ready":"False"
	I0828 18:23:03.367260   75908 node_ready.go:49] node "no-preload-072854" has status "Ready":"True"
	I0828 18:23:03.367286   75908 node_ready.go:38] duration metric: took 7.003649083s for node "no-preload-072854" to be "Ready" ...
	I0828 18:23:03.367296   75908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
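The node_ready lines above poll the node object until its Ready condition flips to True (roughly 7 seconds here), then switch to waiting on the system-critical pods. A small client-go sketch of that readiness check; the kubeconfig path and the 2-second poll interval are assumptions for illustration, not minikube's exact wait logic:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has its Ready condition set to
// True, which is what the node_ready.go wait in the log is polling for.
func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path assumed for illustration only.
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		ready, err := nodeIsReady(ctx, cs, "no-preload-072854")
		if err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}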
	I0828 18:23:03.372211   75908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376919   75908 pod_ready.go:93] pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.376944   75908 pod_ready.go:82] duration metric: took 4.710919ms for pod "coredns-6f6b679f8f-fjclq" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.376954   75908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381043   75908 pod_ready.go:93] pod "etcd-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:03.381066   75908 pod_ready.go:82] duration metric: took 4.10571ms for pod "etcd-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:03.381078   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:05.388413   75908 pod_ready.go:103] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.387040   75908 pod_ready.go:93] pod "kube-apiserver-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.387060   75908 pod_ready.go:82] duration metric: took 3.005974723s for pod "kube-apiserver-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.387070   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391257   75908 pod_ready.go:93] pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.391276   75908 pod_ready.go:82] duration metric: took 4.19923ms for pod "kube-controller-manager-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.391285   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396819   75908 pod_ready.go:93] pod "kube-proxy-tfxfd" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.396836   75908 pod_ready.go:82] duration metric: took 5.545346ms for pod "kube-proxy-tfxfd" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.396845   75908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
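The pod_ready lines above walk the system-critical components one by one and wait for each pod's Ready condition. A compact client-go sketch of that per-pod check, assuming the same kubeconfig path as before and using one of the labels listed in the log (k8s-app=kube-dns) as the selector; this is an illustration, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady mirrors the checks above: a pod counts as Ready once its
// PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path assumed for illustration
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
	}
}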
	I0828 18:23:03.743752   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.242657   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.243781   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:05.249882   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.749888   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.249648   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:06.749518   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.249032   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:07.749910   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.249738   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:08.749748   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.249670   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:09.749246   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:05.906344   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:08.401488   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.402915   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:06.568922   75908 pod_ready.go:93] pod "kube-scheduler-no-preload-072854" in "kube-system" namespace has status "Ready":"True"
	I0828 18:23:06.568948   75908 pod_ready.go:82] duration metric: took 172.096644ms for pod "kube-scheduler-no-preload-072854" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:06.568964   75908 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	I0828 18:23:08.574813   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.576583   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.743641   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.243152   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:10.249340   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:10.749798   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.249721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:11.749337   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.249779   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.249760   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:13.749029   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.249441   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:14.749641   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:12.903188   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.401514   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:13.076559   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.575593   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.742772   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.743273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:15.249678   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:15.749552   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.249786   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:16.748968   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.249139   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.749721   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.249749   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:18.749731   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.249576   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:19.749644   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:17.402418   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.902446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:17.575692   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.576073   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:19.744432   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.243417   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:20.249682   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:20.748965   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.249378   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:21.749011   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:21.749077   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:21.783557   77396 cri.go:89] found id: ""
	I0828 18:23:21.783581   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.783592   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:21.783600   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:21.783667   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:21.816332   77396 cri.go:89] found id: ""
	I0828 18:23:21.816366   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.816377   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:21.816385   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:21.816451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:21.850130   77396 cri.go:89] found id: ""
	I0828 18:23:21.850157   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.850168   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:21.850175   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:21.850240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:21.887000   77396 cri.go:89] found id: ""
	I0828 18:23:21.887028   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.887037   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:21.887045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:21.887106   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:21.922052   77396 cri.go:89] found id: ""
	I0828 18:23:21.922095   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.922106   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:21.922114   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:21.922169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:21.968838   77396 cri.go:89] found id: ""
	I0828 18:23:21.968865   77396 logs.go:276] 0 containers: []
	W0828 18:23:21.968872   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:21.968879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:21.968937   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:22.005361   77396 cri.go:89] found id: ""
	I0828 18:23:22.005387   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.005397   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:22.005404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:22.005465   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:22.043999   77396 cri.go:89] found id: ""
	I0828 18:23:22.044026   77396 logs.go:276] 0 containers: []
	W0828 18:23:22.044034   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:22.044042   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:22.044054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:22.092612   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:22.092641   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:22.105847   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:22.105870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:22.230236   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:22.230254   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:22.230267   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:22.305648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:22.305712   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
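When no control-plane containers can be found, the waiter above falls back to gathering diagnostics: the kubelet and CRI-O journals, dmesg, kubectl describe nodes, and container status via crictl. The commands below are lifted from the log and wrapped in a small local runner purely for illustration; minikube executes them inside the VM over SSH, and the describe-nodes step is omitted here because it needs the in-VM kubeconfig:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same diagnostic commands the log-gathering step above runs, executed
	// locally here only to show their shape.
	diagnostics := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, d := range diagnostics {
		out, err := exec.Command("/bin/bash", "-c", d.cmd).CombinedOutput()
		fmt.Printf("==> %s <==\n%s", d.name, out)
		if err != nil {
			fmt.Printf("(command exited with error: %v)\n", err)
		}
	}
}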
	I0828 18:23:24.843524   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:24.856321   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:24.856412   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:24.891356   77396 cri.go:89] found id: ""
	I0828 18:23:24.891395   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.891406   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:24.891414   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:24.891476   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:24.923476   77396 cri.go:89] found id: ""
	I0828 18:23:24.923504   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.923515   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:24.923522   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:24.923583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:24.955453   77396 cri.go:89] found id: ""
	I0828 18:23:24.955482   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.955493   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:24.955499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:24.955564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:24.991349   77396 cri.go:89] found id: ""
	I0828 18:23:24.991377   77396 logs.go:276] 0 containers: []
	W0828 18:23:24.991384   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:24.991394   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:24.991448   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:25.026464   77396 cri.go:89] found id: ""
	I0828 18:23:25.026493   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.026501   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:25.026508   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:25.026559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:25.066989   77396 cri.go:89] found id: ""
	I0828 18:23:25.067021   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.067045   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:25.067053   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:25.067123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:25.111327   77396 cri.go:89] found id: ""
	I0828 18:23:25.111358   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.111369   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:25.111377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:25.111442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:25.159672   77396 cri.go:89] found id: ""
	I0828 18:23:25.159698   77396 logs.go:276] 0 containers: []
	W0828 18:23:25.159707   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:25.159715   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:25.159726   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:21.902745   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.402292   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:22.075480   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.575344   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:24.743311   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.743442   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:25.216755   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:25.216788   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:25.230365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:25.230399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:25.303227   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:25.303253   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:25.303276   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:25.378467   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:25.378501   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:27.915420   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:27.927659   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:27.927726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:27.961535   77396 cri.go:89] found id: ""
	I0828 18:23:27.961560   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.961568   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:27.961573   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:27.961618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:27.993707   77396 cri.go:89] found id: ""
	I0828 18:23:27.993732   77396 logs.go:276] 0 containers: []
	W0828 18:23:27.993739   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:27.993745   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:27.993792   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:28.027410   77396 cri.go:89] found id: ""
	I0828 18:23:28.027438   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.027445   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:28.027451   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:28.027509   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:28.063874   77396 cri.go:89] found id: ""
	I0828 18:23:28.063909   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.063918   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:28.063924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:28.063974   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:28.096726   77396 cri.go:89] found id: ""
	I0828 18:23:28.096755   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.096763   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:28.096769   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:28.096826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:28.129538   77396 cri.go:89] found id: ""
	I0828 18:23:28.129562   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.129570   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:28.129576   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:28.129633   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:28.167785   77396 cri.go:89] found id: ""
	I0828 18:23:28.167813   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.167821   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:28.167827   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:28.167881   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:28.200417   77396 cri.go:89] found id: ""
	I0828 18:23:28.200445   77396 logs.go:276] 0 containers: []
	W0828 18:23:28.200456   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:28.200467   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:28.200481   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:28.214025   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:28.214054   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:28.280106   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:28.280126   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:28.280139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:28.359834   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:28.359875   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:28.399997   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:28.400028   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:26.902287   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.403446   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:26.576035   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:29.075134   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.080674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:28.744552   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:31.243346   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.243825   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:30.950870   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:30.967367   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:30.967426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:31.007843   77396 cri.go:89] found id: ""
	I0828 18:23:31.007873   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.007882   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:31.007890   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:31.007949   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:31.056710   77396 cri.go:89] found id: ""
	I0828 18:23:31.056744   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.056756   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:31.056764   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:31.056824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:31.101177   77396 cri.go:89] found id: ""
	I0828 18:23:31.101208   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.101218   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:31.101225   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:31.101283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:31.135513   77396 cri.go:89] found id: ""
	I0828 18:23:31.135548   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.135560   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:31.135568   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:31.135635   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:31.172887   77396 cri.go:89] found id: ""
	I0828 18:23:31.172921   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.172932   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:31.172939   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:31.173006   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:31.207744   77396 cri.go:89] found id: ""
	I0828 18:23:31.207775   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.207788   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:31.207795   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:31.207873   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:31.242954   77396 cri.go:89] found id: ""
	I0828 18:23:31.242984   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.242995   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:31.243003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:31.243063   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:31.277382   77396 cri.go:89] found id: ""
	I0828 18:23:31.277418   77396 logs.go:276] 0 containers: []
	W0828 18:23:31.277427   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:31.277436   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:31.277448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.315688   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:31.315722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:31.367565   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:31.367596   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:31.380803   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:31.380839   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:31.447184   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:31.447214   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:31.447229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.022521   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:34.036551   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:34.036615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:34.074735   77396 cri.go:89] found id: ""
	I0828 18:23:34.074763   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.074772   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:34.074780   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:34.074836   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:34.113604   77396 cri.go:89] found id: ""
	I0828 18:23:34.113631   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.113642   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:34.113649   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:34.113711   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:34.152658   77396 cri.go:89] found id: ""
	I0828 18:23:34.152687   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.152701   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:34.152707   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:34.152753   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:34.188748   77396 cri.go:89] found id: ""
	I0828 18:23:34.188775   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.188784   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:34.188789   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:34.188847   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:34.221553   77396 cri.go:89] found id: ""
	I0828 18:23:34.221584   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.221595   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:34.221602   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:34.221666   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:34.257809   77396 cri.go:89] found id: ""
	I0828 18:23:34.257833   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.257843   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:34.257850   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:34.257935   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:34.291217   77396 cri.go:89] found id: ""
	I0828 18:23:34.291246   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.291253   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:34.291261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:34.291327   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:34.324084   77396 cri.go:89] found id: ""
	I0828 18:23:34.324114   77396 logs.go:276] 0 containers: []
	W0828 18:23:34.324122   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:34.324133   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:34.324147   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:34.373802   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:34.373838   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:34.386779   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:34.386807   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:34.457396   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:34.457413   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:34.457428   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:34.531549   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:34.531590   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:31.901633   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:34.402475   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:33.576038   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:36.075226   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:35.743297   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.744669   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:37.068985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:37.083317   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:37.083383   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:37.117109   77396 cri.go:89] found id: ""
	I0828 18:23:37.117144   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.117156   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:37.117164   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:37.117225   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:37.150151   77396 cri.go:89] found id: ""
	I0828 18:23:37.150180   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.150189   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:37.150194   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:37.150249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:37.184263   77396 cri.go:89] found id: ""
	I0828 18:23:37.184289   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.184298   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:37.184303   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:37.184358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:37.214442   77396 cri.go:89] found id: ""
	I0828 18:23:37.214468   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.214476   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:37.214481   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:37.214545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:37.251690   77396 cri.go:89] found id: ""
	I0828 18:23:37.251723   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.251732   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:37.251738   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:37.251790   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:37.286900   77396 cri.go:89] found id: ""
	I0828 18:23:37.286929   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.286939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:37.286946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:37.287026   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:37.324010   77396 cri.go:89] found id: ""
	I0828 18:23:37.324039   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.324049   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:37.324057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:37.324114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:37.359723   77396 cri.go:89] found id: ""
	I0828 18:23:37.359777   77396 logs.go:276] 0 containers: []
	W0828 18:23:37.359785   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:37.359813   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:37.359829   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:37.411363   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:37.411395   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:37.425078   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:37.425108   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:37.498351   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:37.498374   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:37.498399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:37.580149   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:37.580187   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:40.119822   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:40.134555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:40.134613   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:40.173129   77396 cri.go:89] found id: ""
	I0828 18:23:40.173156   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.173164   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:40.173170   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:40.173218   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:36.902004   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:39.401256   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:38.575639   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.575835   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.243909   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.743492   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:40.205445   77396 cri.go:89] found id: ""
	I0828 18:23:40.205470   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.205477   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:40.205482   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:40.205536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:40.237018   77396 cri.go:89] found id: ""
	I0828 18:23:40.237046   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.237057   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:40.237064   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:40.237124   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:40.271188   77396 cri.go:89] found id: ""
	I0828 18:23:40.271220   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.271232   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:40.271239   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:40.271302   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:40.304532   77396 cri.go:89] found id: ""
	I0828 18:23:40.304566   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.304577   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:40.304585   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:40.304652   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:40.338114   77396 cri.go:89] found id: ""
	I0828 18:23:40.338145   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.338156   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:40.338165   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:40.338227   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:40.370126   77396 cri.go:89] found id: ""
	I0828 18:23:40.370160   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.370176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:40.370184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:40.370247   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:40.406139   77396 cri.go:89] found id: ""
	I0828 18:23:40.406167   77396 logs.go:276] 0 containers: []
	W0828 18:23:40.406176   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:40.406186   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:40.406201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:40.459364   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:40.459404   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:40.472467   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:40.472496   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:40.546389   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:40.546420   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:40.546438   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:40.628550   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:40.628586   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:43.170210   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:43.183441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:43.183516   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:43.215798   77396 cri.go:89] found id: ""
	I0828 18:23:43.215823   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.215834   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:43.215841   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:43.215905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:43.250001   77396 cri.go:89] found id: ""
	I0828 18:23:43.250027   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.250035   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:43.250041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:43.250110   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:43.284621   77396 cri.go:89] found id: ""
	I0828 18:23:43.284654   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.284662   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:43.284668   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:43.284716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:43.318780   77396 cri.go:89] found id: ""
	I0828 18:23:43.318805   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.318815   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:43.318821   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:43.318866   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:43.351788   77396 cri.go:89] found id: ""
	I0828 18:23:43.351810   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.351818   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:43.351823   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:43.351872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:43.388719   77396 cri.go:89] found id: ""
	I0828 18:23:43.388745   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.388755   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:43.388761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:43.388810   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:43.423250   77396 cri.go:89] found id: ""
	I0828 18:23:43.423273   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.423283   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:43.423290   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:43.423376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:43.464644   77396 cri.go:89] found id: ""
	I0828 18:23:43.464672   77396 logs.go:276] 0 containers: []
	W0828 18:23:43.464683   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:43.464693   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:43.464708   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:43.517422   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:43.517457   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:43.530317   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:43.530342   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:43.599776   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:43.599795   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:43.599806   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:43.679377   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:43.679409   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
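Each retry cycle above walks the same list of control-plane components (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) and asks the runtime, via crictl, whether any container matches the name; every query returns an empty ID list, hence the "No container was found matching ..." warnings. A hypothetical local re-creation of that scan is sketched below; running crictl directly instead of through minikube's ssh_runner, and printing instead of logging, are assumptions made for brevity.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same component names the log iterates over in each cycle.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Mirrors the command in the log: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// This is the state the log reports for every component:
			// the control plane was never created on this node.
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%q containers: %v\n", name, ids)
	}
}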
	I0828 18:23:41.401619   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:43.403142   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:42.576264   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.076333   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:45.242626   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.243310   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:46.215985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:46.229564   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:46.229632   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:46.267425   77396 cri.go:89] found id: ""
	I0828 18:23:46.267453   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.267464   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:46.267472   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:46.267534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:46.302532   77396 cri.go:89] found id: ""
	I0828 18:23:46.302562   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.302573   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:46.302580   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:46.302645   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:46.338197   77396 cri.go:89] found id: ""
	I0828 18:23:46.338226   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.338237   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:46.338244   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:46.338305   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:46.371503   77396 cri.go:89] found id: ""
	I0828 18:23:46.371528   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.371535   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:46.371542   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:46.371606   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:46.406364   77396 cri.go:89] found id: ""
	I0828 18:23:46.406386   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.406399   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:46.406405   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:46.406451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:46.441519   77396 cri.go:89] found id: ""
	I0828 18:23:46.441547   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.441557   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:46.441565   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:46.441626   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:46.475413   77396 cri.go:89] found id: ""
	I0828 18:23:46.475445   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.475455   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:46.475465   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:46.475531   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:46.508722   77396 cri.go:89] found id: ""
	I0828 18:23:46.508752   77396 logs.go:276] 0 containers: []
	W0828 18:23:46.508762   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:46.508772   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:46.508790   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:46.564737   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:46.564776   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:46.578833   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:46.578860   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:46.649533   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:46.649554   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:46.649566   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:46.725738   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:46.725780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.263052   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:49.275342   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:49.275403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:49.310092   77396 cri.go:89] found id: ""
	I0828 18:23:49.310121   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.310131   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:49.310138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:49.310200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:49.347624   77396 cri.go:89] found id: ""
	I0828 18:23:49.347649   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.347657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:49.347662   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:49.347708   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:49.383801   77396 cri.go:89] found id: ""
	I0828 18:23:49.383827   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.383834   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:49.383840   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:49.383889   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:49.420443   77396 cri.go:89] found id: ""
	I0828 18:23:49.420470   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.420478   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:49.420484   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:49.420536   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:49.452225   77396 cri.go:89] found id: ""
	I0828 18:23:49.452247   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.452255   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:49.452260   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:49.452306   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:49.486137   77396 cri.go:89] found id: ""
	I0828 18:23:49.486164   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.486172   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:49.486178   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:49.486224   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:49.519081   77396 cri.go:89] found id: ""
	I0828 18:23:49.519115   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.519126   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:49.519137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:49.519199   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:49.552903   77396 cri.go:89] found id: ""
	I0828 18:23:49.552932   77396 logs.go:276] 0 containers: []
	W0828 18:23:49.552940   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:49.552948   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:49.552962   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:49.623963   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:49.624000   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:49.624023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:49.700684   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:49.700722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:49.738241   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:49.738265   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:49.786941   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:49.786976   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:45.901814   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.903106   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.905017   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:47.575690   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.576689   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:49.243535   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:51.243843   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:53.244097   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
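Interleaved with the log gathering, three other runs (PIDs 75908, 76435 and 76486) keep polling their metrics-server pods, which never report "Ready":"True"; that stuck condition is presumably why the metrics-server-dependent StartStop tests in this report eventually time out. The check behind those pod_ready.go lines amounts to "is the pod's Ready condition True". A hedged client-go sketch of that check follows; the kubeconfig path, namespace and pod name are copied from the log for illustration, and the code is not minikube's own implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and pod name are taken from the log lines above (assumptions
	// for this sketch; any reachable cluster and pod would do).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-lccm2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		// A pod is considered ready when its PodReady condition is True;
		// the log shows this staying False for the whole window.
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}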
	I0828 18:23:52.300380   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:52.314281   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:52.314347   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:52.348497   77396 cri.go:89] found id: ""
	I0828 18:23:52.348522   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.348532   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:52.348539   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:52.348605   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:52.382060   77396 cri.go:89] found id: ""
	I0828 18:23:52.382107   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.382119   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:52.382127   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:52.382242   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:52.414306   77396 cri.go:89] found id: ""
	I0828 18:23:52.414335   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.414348   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:52.414356   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:52.414424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:52.448965   77396 cri.go:89] found id: ""
	I0828 18:23:52.448995   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.449005   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:52.449012   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:52.449079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:52.479102   77396 cri.go:89] found id: ""
	I0828 18:23:52.479129   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.479140   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:52.479148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:52.479213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:52.510025   77396 cri.go:89] found id: ""
	I0828 18:23:52.510051   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.510061   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:52.510068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:52.510171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:52.544472   77396 cri.go:89] found id: ""
	I0828 18:23:52.544501   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.544510   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:52.544517   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:52.544584   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:52.579962   77396 cri.go:89] found id: ""
	I0828 18:23:52.579986   77396 logs.go:276] 0 containers: []
	W0828 18:23:52.579993   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:52.580000   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:52.580015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:52.631775   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:52.631809   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:52.645200   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:52.645230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:52.709318   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:52.709341   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:52.709355   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:52.788797   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:52.788834   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:52.402059   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.901750   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:52.075625   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:54.076533   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.743325   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.242726   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:55.324787   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:55.338003   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:55.338109   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:55.371733   77396 cri.go:89] found id: ""
	I0828 18:23:55.371757   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.371764   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:55.371770   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:55.371818   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:55.407922   77396 cri.go:89] found id: ""
	I0828 18:23:55.407944   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.407951   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:55.407957   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:55.408009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:55.443667   77396 cri.go:89] found id: ""
	I0828 18:23:55.443693   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.443700   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:55.443706   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:55.443761   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:55.478692   77396 cri.go:89] found id: ""
	I0828 18:23:55.478725   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.478735   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:55.478742   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:55.478804   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:55.512495   77396 cri.go:89] found id: ""
	I0828 18:23:55.512517   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.512525   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:55.512530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:55.512583   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:55.546363   77396 cri.go:89] found id: ""
	I0828 18:23:55.546404   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.546415   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:55.546423   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:55.546478   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:55.579505   77396 cri.go:89] found id: ""
	I0828 18:23:55.579526   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.579533   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:55.579539   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:55.579588   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:55.610588   77396 cri.go:89] found id: ""
	I0828 18:23:55.610612   77396 logs.go:276] 0 containers: []
	W0828 18:23:55.610628   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:55.610648   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:55.610659   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:55.647289   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:55.647313   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:55.696660   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:55.696699   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:55.709215   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:55.709242   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:55.781755   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:55.781773   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:55.781786   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.359553   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:23:58.371960   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:23:58.372034   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:23:58.404455   77396 cri.go:89] found id: ""
	I0828 18:23:58.404481   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.404488   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:23:58.404494   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:23:58.404545   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:23:58.436955   77396 cri.go:89] found id: ""
	I0828 18:23:58.436979   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.436989   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:23:58.436996   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:23:58.437055   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:23:58.467985   77396 cri.go:89] found id: ""
	I0828 18:23:58.468011   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.468021   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:23:58.468028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:23:58.468085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:23:58.500356   77396 cri.go:89] found id: ""
	I0828 18:23:58.500390   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.500398   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:23:58.500404   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:23:58.500469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:23:58.538445   77396 cri.go:89] found id: ""
	I0828 18:23:58.538469   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.538477   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:23:58.538483   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:23:58.538541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:23:58.577827   77396 cri.go:89] found id: ""
	I0828 18:23:58.577851   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.577859   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:23:58.577867   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:23:58.577932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:23:58.611863   77396 cri.go:89] found id: ""
	I0828 18:23:58.611891   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.611902   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:23:58.611909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:23:58.611973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:23:58.646133   77396 cri.go:89] found id: ""
	I0828 18:23:58.646165   77396 logs.go:276] 0 containers: []
	W0828 18:23:58.646175   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:23:58.646187   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:23:58.646204   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:23:58.659103   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:23:58.659134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:23:58.725271   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:23:58.725292   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:23:58.725310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:23:58.807171   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:23:58.807218   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:23:58.848245   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:23:58.848273   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:23:56.902329   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.902824   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:56.575727   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:23:58.576160   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.075851   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:00.243273   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:02.247987   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:01.402171   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:01.415498   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:01.415574   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:01.449314   77396 cri.go:89] found id: ""
	I0828 18:24:01.449347   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.449355   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:01.449362   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:01.449425   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:01.485354   77396 cri.go:89] found id: ""
	I0828 18:24:01.485381   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.485388   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:01.485395   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:01.485439   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:01.518106   77396 cri.go:89] found id: ""
	I0828 18:24:01.518132   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.518139   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:01.518145   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:01.518191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:01.551298   77396 cri.go:89] found id: ""
	I0828 18:24:01.551329   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.551340   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:01.551348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:01.551406   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:01.587074   77396 cri.go:89] found id: ""
	I0828 18:24:01.587100   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.587107   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:01.587112   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:01.587158   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:01.619482   77396 cri.go:89] found id: ""
	I0828 18:24:01.619510   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.619518   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:01.619523   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:01.619575   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:01.651938   77396 cri.go:89] found id: ""
	I0828 18:24:01.651965   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.651972   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:01.651978   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:01.652039   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:01.685390   77396 cri.go:89] found id: ""
	I0828 18:24:01.685419   77396 logs.go:276] 0 containers: []
	W0828 18:24:01.685429   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:01.685437   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:01.685448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:01.723631   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:01.723656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:01.777387   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:01.777422   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:01.793748   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:01.793781   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:01.857869   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:01.857901   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:01.857915   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.434883   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:04.447876   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:04.447953   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:04.480730   77396 cri.go:89] found id: ""
	I0828 18:24:04.480762   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.480774   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:04.480781   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:04.480841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:04.514621   77396 cri.go:89] found id: ""
	I0828 18:24:04.514647   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.514657   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:04.514664   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:04.514722   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:04.552044   77396 cri.go:89] found id: ""
	I0828 18:24:04.552071   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.552083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:04.552090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:04.552151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:04.587402   77396 cri.go:89] found id: ""
	I0828 18:24:04.587427   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.587440   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:04.587446   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:04.587506   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:04.619299   77396 cri.go:89] found id: ""
	I0828 18:24:04.619329   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.619337   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:04.619343   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:04.619393   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:04.659363   77396 cri.go:89] found id: ""
	I0828 18:24:04.659391   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.659399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:04.659408   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:04.659469   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:04.691997   77396 cri.go:89] found id: ""
	I0828 18:24:04.692022   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.692030   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:04.692035   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:04.692089   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:04.725162   77396 cri.go:89] found id: ""
	I0828 18:24:04.725188   77396 logs.go:276] 0 containers: []
	W0828 18:24:04.725196   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:04.725204   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:04.725215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:04.778072   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:04.778112   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:04.792571   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:04.792604   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:04.863074   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:04.863096   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:04.863107   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:04.958480   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:04.958516   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
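The whole sequence repeats roughly every three seconds: pgrep for a kube-apiserver process, scan the runtime for component containers, then gather kubelet, dmesg, describe-nodes, CRI-O and container-status output again. A stripped-down, hypothetical version of that outer wait loop is sketched below; the interval, the timeout and the pgrep-based check are assumptions chosen to mirror the log, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls until a kube-apiserver process appears or the timeout
// expires, standing in for the retry loop visible in the log.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same check the log runs over SSH: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero when nothing matches, which Run reports as an error.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(3*time.Second, 1*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process found")
}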
	I0828 18:24:01.401445   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.402916   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:03.575667   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:05.576444   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:04.744216   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.243680   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.498048   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:07.511286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:07.511350   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:07.554880   77396 cri.go:89] found id: ""
	I0828 18:24:07.554910   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.554921   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:07.554929   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:07.554990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:07.590593   77396 cri.go:89] found id: ""
	I0828 18:24:07.590621   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.590631   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:07.590641   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:07.590706   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:07.624067   77396 cri.go:89] found id: ""
	I0828 18:24:07.624096   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.624107   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:07.624113   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:07.624169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:07.657241   77396 cri.go:89] found id: ""
	I0828 18:24:07.657269   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.657277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:07.657282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:07.657341   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:07.702308   77396 cri.go:89] found id: ""
	I0828 18:24:07.702358   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.702368   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:07.702375   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:07.702438   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:07.736409   77396 cri.go:89] found id: ""
	I0828 18:24:07.736446   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.736454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:07.736459   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:07.736527   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:07.771001   77396 cri.go:89] found id: ""
	I0828 18:24:07.771029   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.771037   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:07.771043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:07.771090   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:07.807061   77396 cri.go:89] found id: ""
	I0828 18:24:07.807089   77396 logs.go:276] 0 containers: []
	W0828 18:24:07.807099   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:07.807111   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:07.807125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:07.885254   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:07.885293   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:07.926920   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:07.926948   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:07.980485   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:07.980524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:07.994512   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:07.994545   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:08.071058   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:05.901817   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.902547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.402041   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:07.576656   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.077246   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:09.244155   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:11.743283   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:10.571233   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:10.586227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:10.586298   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:10.623971   77396 cri.go:89] found id: ""
	I0828 18:24:10.623997   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.624006   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:10.624014   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:10.624074   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:10.675472   77396 cri.go:89] found id: ""
	I0828 18:24:10.675506   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.675518   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:10.675526   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:10.675599   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:10.707885   77396 cri.go:89] found id: ""
	I0828 18:24:10.707913   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.707922   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:10.707931   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:10.707991   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:10.740896   77396 cri.go:89] found id: ""
	I0828 18:24:10.740924   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.740934   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:10.740942   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:10.741058   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:10.776125   77396 cri.go:89] found id: ""
	I0828 18:24:10.776155   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.776167   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:10.776174   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:10.776234   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:10.814024   77396 cri.go:89] found id: ""
	I0828 18:24:10.814053   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.814062   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:10.814068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:10.814132   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:10.851380   77396 cri.go:89] found id: ""
	I0828 18:24:10.851404   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.851412   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:10.851418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:10.851479   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:10.888162   77396 cri.go:89] found id: ""
	I0828 18:24:10.888193   77396 logs.go:276] 0 containers: []
	W0828 18:24:10.888204   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:10.888215   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:10.888229   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:10.938481   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:10.938520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:10.952841   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:10.952870   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:11.020956   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:11.020982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:11.020997   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:11.101883   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:11.101920   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:13.642878   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:13.657098   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:13.657172   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:13.695651   77396 cri.go:89] found id: ""
	I0828 18:24:13.695686   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.695694   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:13.695699   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:13.695747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:13.732419   77396 cri.go:89] found id: ""
	I0828 18:24:13.732452   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.732465   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:13.732473   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:13.732523   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:13.770052   77396 cri.go:89] found id: ""
	I0828 18:24:13.770090   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.770099   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:13.770104   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:13.770157   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:13.807955   77396 cri.go:89] found id: ""
	I0828 18:24:13.807980   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.807988   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:13.807993   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:13.808045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:13.849535   77396 cri.go:89] found id: ""
	I0828 18:24:13.849559   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.849566   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:13.849571   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:13.849621   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:13.889078   77396 cri.go:89] found id: ""
	I0828 18:24:13.889105   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.889114   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:13.889122   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:13.889177   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:13.924998   77396 cri.go:89] found id: ""
	I0828 18:24:13.925030   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.925040   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:13.925046   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:13.925095   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:13.962794   77396 cri.go:89] found id: ""
	I0828 18:24:13.962824   77396 logs.go:276] 0 containers: []
	W0828 18:24:13.962835   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:13.962843   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:13.962854   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:14.016213   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:14.016260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:14.030089   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:14.030119   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:14.101102   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:14.101121   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:14.101134   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:14.179243   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:14.179283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:12.903671   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:15.401472   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:12.575572   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:14.575994   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:13.743881   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.243453   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:16.725412   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:16.738387   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:16.738459   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:16.773934   77396 cri.go:89] found id: ""
	I0828 18:24:16.773960   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.773967   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:16.773973   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:16.774022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:16.807374   77396 cri.go:89] found id: ""
	I0828 18:24:16.807402   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.807412   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:16.807418   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:16.807468   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:16.841569   77396 cri.go:89] found id: ""
	I0828 18:24:16.841595   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.841605   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:16.841613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:16.841673   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:16.877225   77396 cri.go:89] found id: ""
	I0828 18:24:16.877247   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.877255   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:16.877261   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:16.877321   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:16.911357   77396 cri.go:89] found id: ""
	I0828 18:24:16.911385   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.911395   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:16.911402   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:16.911458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:16.955061   77396 cri.go:89] found id: ""
	I0828 18:24:16.955087   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.955095   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:16.955103   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:16.955156   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:16.989851   77396 cri.go:89] found id: ""
	I0828 18:24:16.989887   77396 logs.go:276] 0 containers: []
	W0828 18:24:16.989900   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:16.989906   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:16.989966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:17.023974   77396 cri.go:89] found id: ""
	I0828 18:24:17.024005   77396 logs.go:276] 0 containers: []
	W0828 18:24:17.024016   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:17.024024   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:17.024036   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:17.085245   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:17.085279   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:17.100181   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:17.100211   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:17.185406   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:17.185426   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:17.185437   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:17.266980   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:17.267020   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:19.808568   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:19.823365   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:19.823432   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:19.859428   77396 cri.go:89] found id: ""
	I0828 18:24:19.859451   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.859459   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:19.859464   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:19.859518   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:19.895152   77396 cri.go:89] found id: ""
	I0828 18:24:19.895176   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.895186   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:19.895202   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:19.895263   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:19.935775   77396 cri.go:89] found id: ""
	I0828 18:24:19.935806   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.935815   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:19.935828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:19.935893   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:19.969484   77396 cri.go:89] found id: ""
	I0828 18:24:19.969518   77396 logs.go:276] 0 containers: []
	W0828 18:24:19.969528   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:19.969534   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:19.969615   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:20.002893   77396 cri.go:89] found id: ""
	I0828 18:24:20.002935   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.002947   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:20.002955   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:20.003041   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:20.034641   77396 cri.go:89] found id: ""
	I0828 18:24:20.034668   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.034678   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:20.034686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:20.034750   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:20.064580   77396 cri.go:89] found id: ""
	I0828 18:24:20.064609   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.064620   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:20.064627   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:20.064710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:20.109306   77396 cri.go:89] found id: ""
	I0828 18:24:20.109348   77396 logs.go:276] 0 containers: []
	W0828 18:24:20.109360   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:20.109371   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:20.109390   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:20.160179   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:20.160213   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:20.172953   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:20.172982   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:24:17.402222   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.402389   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:17.076219   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:19.575317   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:18.742920   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:21.243791   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:24:20.245855   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:20.245879   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:20.245894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:20.333372   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:20.333430   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:22.870985   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:22.886333   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:22.886403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:22.923248   77396 cri.go:89] found id: ""
	I0828 18:24:22.923278   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.923290   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:22.923298   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:22.923362   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:22.961720   77396 cri.go:89] found id: ""
	I0828 18:24:22.961747   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.961758   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:22.961767   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:22.961826   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:22.996416   77396 cri.go:89] found id: ""
	I0828 18:24:22.996451   77396 logs.go:276] 0 containers: []
	W0828 18:24:22.996461   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:22.996469   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:22.996534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:23.031328   77396 cri.go:89] found id: ""
	I0828 18:24:23.031354   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.031365   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:23.031373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:23.031442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:23.062790   77396 cri.go:89] found id: ""
	I0828 18:24:23.062818   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.062828   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:23.062836   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:23.062900   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:23.095783   77396 cri.go:89] found id: ""
	I0828 18:24:23.095811   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.095822   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:23.095829   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:23.095887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:23.128950   77396 cri.go:89] found id: ""
	I0828 18:24:23.128976   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.128984   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:23.128989   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:23.129035   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:23.161040   77396 cri.go:89] found id: ""
	I0828 18:24:23.161070   77396 logs.go:276] 0 containers: []
	W0828 18:24:23.161081   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:23.161093   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:23.161109   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:23.209200   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:23.209232   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:23.222326   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:23.222369   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:23.294157   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:23.294223   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:23.294235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:23.371364   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:23.371399   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:21.902165   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.902593   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:22.075187   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:24.076034   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:23.743186   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.245507   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.248023   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:25.911853   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:25.924909   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:25.925042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:25.958257   77396 cri.go:89] found id: ""
	I0828 18:24:25.958286   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.958294   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:25.958300   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:25.958380   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:25.991284   77396 cri.go:89] found id: ""
	I0828 18:24:25.991312   77396 logs.go:276] 0 containers: []
	W0828 18:24:25.991320   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:25.991325   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:25.991373   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:26.023932   77396 cri.go:89] found id: ""
	I0828 18:24:26.023963   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.023974   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:26.023981   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:26.024042   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:26.055233   77396 cri.go:89] found id: ""
	I0828 18:24:26.055264   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.055274   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:26.055282   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:26.055342   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:26.091307   77396 cri.go:89] found id: ""
	I0828 18:24:26.091334   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.091345   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:26.091353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:26.091403   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:26.123887   77396 cri.go:89] found id: ""
	I0828 18:24:26.123919   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.123929   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:26.123943   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:26.124004   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:26.156028   77396 cri.go:89] found id: ""
	I0828 18:24:26.156055   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.156063   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:26.156068   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:26.156129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:26.186952   77396 cri.go:89] found id: ""
	I0828 18:24:26.186981   77396 logs.go:276] 0 containers: []
	W0828 18:24:26.186989   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:26.186998   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:26.187008   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:26.234021   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:26.234065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:26.249052   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:26.249079   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:26.323382   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:26.323406   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:26.323421   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:26.408279   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:26.408306   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:28.950242   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:28.964886   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:28.964973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:28.999657   77396 cri.go:89] found id: ""
	I0828 18:24:28.999686   77396 logs.go:276] 0 containers: []
	W0828 18:24:28.999695   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:28.999701   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:28.999759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:29.036649   77396 cri.go:89] found id: ""
	I0828 18:24:29.036682   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.036691   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:29.036697   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:29.036758   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:29.071048   77396 cri.go:89] found id: ""
	I0828 18:24:29.071073   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.071083   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:29.071090   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:29.071149   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:29.106377   77396 cri.go:89] found id: ""
	I0828 18:24:29.106412   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.106423   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:29.106430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:29.106494   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:29.141150   77396 cri.go:89] found id: ""
	I0828 18:24:29.141183   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.141192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:29.141198   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:29.141261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:29.175977   77396 cri.go:89] found id: ""
	I0828 18:24:29.176007   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.176015   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:29.176022   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:29.176085   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:29.209684   77396 cri.go:89] found id: ""
	I0828 18:24:29.209714   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.209725   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:29.209732   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:29.209791   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:29.244105   77396 cri.go:89] found id: ""
	I0828 18:24:29.244133   77396 logs.go:276] 0 containers: []
	W0828 18:24:29.244143   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:29.244153   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:29.244168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:29.304288   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:29.304326   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:29.319606   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:29.319636   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:29.389101   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:29.389123   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:29.389135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:29.474129   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:29.474168   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:26.401494   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.402117   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.402503   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:26.574724   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:28.575806   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:31.075079   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:30.743295   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.743355   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:32.018867   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:32.032399   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:32.032467   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:32.066994   77396 cri.go:89] found id: ""
	I0828 18:24:32.067023   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.067032   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:32.067038   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:32.067094   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:32.102133   77396 cri.go:89] found id: ""
	I0828 18:24:32.102164   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.102176   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:32.102183   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:32.102237   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:32.136427   77396 cri.go:89] found id: ""
	I0828 18:24:32.136450   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.136457   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:32.136463   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:32.136514   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.169993   77396 cri.go:89] found id: ""
	I0828 18:24:32.170026   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.170034   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:32.170040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:32.170114   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:32.202191   77396 cri.go:89] found id: ""
	I0828 18:24:32.202218   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.202229   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:32.202236   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:32.202297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:32.241866   77396 cri.go:89] found id: ""
	I0828 18:24:32.241890   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.241900   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:32.241908   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:32.241980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:32.275919   77396 cri.go:89] found id: ""
	I0828 18:24:32.275949   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.275965   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:32.275972   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:32.276033   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:32.310958   77396 cri.go:89] found id: ""
	I0828 18:24:32.310991   77396 logs.go:276] 0 containers: []
	W0828 18:24:32.311002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:32.311010   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:32.311023   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:32.367619   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:32.367665   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:32.380676   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:32.380707   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:32.445626   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:32.445650   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:32.445668   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:32.528458   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:32.528493   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:35.070182   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:35.084599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:35.084707   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:35.120542   77396 cri.go:89] found id: ""
	I0828 18:24:35.120568   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.120578   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:35.120585   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:35.120644   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:35.159336   77396 cri.go:89] found id: ""
	I0828 18:24:35.159361   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.159372   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:35.159378   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:35.159445   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:35.197161   77396 cri.go:89] found id: ""
	I0828 18:24:35.197185   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.197196   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:35.197203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:35.197267   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:32.903836   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.401184   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:33.574441   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.574602   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.244147   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.744307   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:35.233507   77396 cri.go:89] found id: ""
	I0828 18:24:35.233533   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.233542   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:35.233548   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:35.233609   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:35.270403   77396 cri.go:89] found id: ""
	I0828 18:24:35.270440   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.270448   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:35.270454   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:35.270503   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:35.304119   77396 cri.go:89] found id: ""
	I0828 18:24:35.304141   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.304149   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:35.304155   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:35.304223   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:35.341477   77396 cri.go:89] found id: ""
	I0828 18:24:35.341507   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.341518   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:35.341525   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:35.341589   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:35.374180   77396 cri.go:89] found id: ""
	I0828 18:24:35.374207   77396 logs.go:276] 0 containers: []
	W0828 18:24:35.374215   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:35.374224   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:35.374235   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:35.428008   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:35.428041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:35.443131   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:35.443159   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:35.515296   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:35.515318   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:35.515332   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:35.590734   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:35.590765   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.129856   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:38.143354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:38.143413   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:38.174964   77396 cri.go:89] found id: ""
	I0828 18:24:38.174993   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.175004   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:38.175011   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:38.175083   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:38.211424   77396 cri.go:89] found id: ""
	I0828 18:24:38.211460   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.211471   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:38.211477   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:38.211533   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:38.244667   77396 cri.go:89] found id: ""
	I0828 18:24:38.244697   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.244712   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:38.244719   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:38.244779   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:38.277930   77396 cri.go:89] found id: ""
	I0828 18:24:38.277955   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.277963   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:38.277969   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:38.278020   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:38.311374   77396 cri.go:89] found id: ""
	I0828 18:24:38.311403   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.311413   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:38.311420   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:38.311477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:38.345467   77396 cri.go:89] found id: ""
	I0828 18:24:38.345496   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.345507   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:38.345515   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:38.345576   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:38.377554   77396 cri.go:89] found id: ""
	I0828 18:24:38.377584   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.377595   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:38.377613   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:38.377675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:38.410101   77396 cri.go:89] found id: ""
	I0828 18:24:38.410132   77396 logs.go:276] 0 containers: []
	W0828 18:24:38.410142   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:38.410151   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:38.410165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:38.422496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:38.422523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:38.486692   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:38.486715   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:38.486728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:38.567295   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:38.567331   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:38.605787   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:38.605820   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:37.402128   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.902663   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:37.574935   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:39.575447   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:40.243971   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.743768   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:41.159454   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:41.172776   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:41.172845   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:41.205430   77396 cri.go:89] found id: ""
	I0828 18:24:41.205459   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.205470   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:41.205477   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:41.205541   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:41.238941   77396 cri.go:89] found id: ""
	I0828 18:24:41.238968   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.238978   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:41.238985   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:41.239047   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:41.276056   77396 cri.go:89] found id: ""
	I0828 18:24:41.276079   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.276086   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:41.276092   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:41.276140   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:41.309018   77396 cri.go:89] found id: ""
	I0828 18:24:41.309043   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.309051   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:41.309057   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:41.309103   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:41.343279   77396 cri.go:89] found id: ""
	I0828 18:24:41.343301   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.343309   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:41.343314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:41.343360   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:41.376723   77396 cri.go:89] found id: ""
	I0828 18:24:41.376749   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.376756   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:41.376762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:41.376811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:41.411996   77396 cri.go:89] found id: ""
	I0828 18:24:41.412023   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.412034   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:41.412040   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:41.412091   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:41.445988   77396 cri.go:89] found id: ""
	I0828 18:24:41.446016   77396 logs.go:276] 0 containers: []
	W0828 18:24:41.446026   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:41.446037   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:41.446053   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:41.498760   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:41.498799   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:41.512383   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:41.512413   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:41.582469   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:41.582493   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:41.582506   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:41.658801   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:41.658836   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.195154   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:44.207904   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:44.207978   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:44.241620   77396 cri.go:89] found id: ""
	I0828 18:24:44.241649   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.241659   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:44.241667   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:44.241726   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:44.277206   77396 cri.go:89] found id: ""
	I0828 18:24:44.277238   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.277248   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:44.277254   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:44.277313   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:44.314367   77396 cri.go:89] found id: ""
	I0828 18:24:44.314397   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.314407   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:44.314415   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:44.314473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:44.356384   77396 cri.go:89] found id: ""
	I0828 18:24:44.356417   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.356429   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:44.356436   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:44.356499   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:44.388781   77396 cri.go:89] found id: ""
	I0828 18:24:44.388804   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.388812   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:44.388818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:44.388864   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:44.422896   77396 cri.go:89] found id: ""
	I0828 18:24:44.422927   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.422939   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:44.422946   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:44.423000   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:44.457218   77396 cri.go:89] found id: ""
	I0828 18:24:44.457242   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.457250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:44.457256   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:44.457315   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:44.489819   77396 cri.go:89] found id: ""
	I0828 18:24:44.489846   77396 logs.go:276] 0 containers: []
	W0828 18:24:44.489854   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:44.489874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:44.489886   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:44.526759   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:44.526789   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:44.578813   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:44.578844   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:44.592066   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:44.592105   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:44.655504   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:44.655528   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:44.655547   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:42.401964   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.901869   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:42.076081   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:44.576010   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:45.242907   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.244400   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:47.240915   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:47.253259   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:47.253324   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:47.287911   77396 cri.go:89] found id: ""
	I0828 18:24:47.287939   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.287950   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:47.287958   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:47.288017   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:47.319834   77396 cri.go:89] found id: ""
	I0828 18:24:47.319863   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.319871   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:47.319877   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:47.319947   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:47.356339   77396 cri.go:89] found id: ""
	I0828 18:24:47.356370   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.356395   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:47.356403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:47.356481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:47.388621   77396 cri.go:89] found id: ""
	I0828 18:24:47.388646   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.388656   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:47.388663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:47.388713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:47.422495   77396 cri.go:89] found id: ""
	I0828 18:24:47.422527   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.422537   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:47.422545   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:47.422614   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:47.458799   77396 cri.go:89] found id: ""
	I0828 18:24:47.458825   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.458833   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:47.458839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:47.458885   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:47.496184   77396 cri.go:89] found id: ""
	I0828 18:24:47.496215   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.496226   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:47.496233   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:47.496286   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:47.536283   77396 cri.go:89] found id: ""
	I0828 18:24:47.536311   77396 logs.go:276] 0 containers: []
	W0828 18:24:47.536322   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:47.536333   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:47.536347   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:47.588024   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:47.588056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:47.600661   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:47.600727   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:47.669096   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:47.669124   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:47.669139   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:47.753696   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:47.753725   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:46.902404   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.402357   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:46.576078   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.075244   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:49.744421   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:52.243878   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:50.293600   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:50.306623   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:50.306715   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:50.340416   77396 cri.go:89] found id: ""
	I0828 18:24:50.340448   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.340460   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:50.340468   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:50.340534   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:50.375812   77396 cri.go:89] found id: ""
	I0828 18:24:50.375843   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.375854   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:50.375861   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:50.375924   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:50.414399   77396 cri.go:89] found id: ""
	I0828 18:24:50.414426   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.414435   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:50.414444   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:50.414512   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:50.451285   77396 cri.go:89] found id: ""
	I0828 18:24:50.451316   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.451328   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:50.451336   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:50.451404   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:50.487828   77396 cri.go:89] found id: ""
	I0828 18:24:50.487852   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.487863   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:50.487871   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:50.487929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:50.520989   77396 cri.go:89] found id: ""
	I0828 18:24:50.521015   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.521023   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:50.521028   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:50.521086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:50.553231   77396 cri.go:89] found id: ""
	I0828 18:24:50.553262   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.553271   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:50.553277   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:50.553332   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:50.588612   77396 cri.go:89] found id: ""
	I0828 18:24:50.588644   77396 logs.go:276] 0 containers: []
	W0828 18:24:50.588654   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:50.588663   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:50.588674   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:50.642018   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:50.642065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:50.655887   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:50.655918   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:50.721935   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:50.721964   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:50.721980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:50.802009   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:50.802049   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:53.344650   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:53.357952   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:53.358011   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:53.393369   77396 cri.go:89] found id: ""
	I0828 18:24:53.393399   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.393408   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:53.393413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:53.393475   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:53.425918   77396 cri.go:89] found id: ""
	I0828 18:24:53.425947   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.425958   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:53.425965   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:53.426018   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:53.461827   77396 cri.go:89] found id: ""
	I0828 18:24:53.461857   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.461867   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:53.461874   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:53.461966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:53.494323   77396 cri.go:89] found id: ""
	I0828 18:24:53.494353   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.494363   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:53.494370   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:53.494430   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:53.531687   77396 cri.go:89] found id: ""
	I0828 18:24:53.531715   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.531726   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:53.531733   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:53.531789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:53.565794   77396 cri.go:89] found id: ""
	I0828 18:24:53.565819   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.565829   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:53.565838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:53.565894   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:53.601666   77396 cri.go:89] found id: ""
	I0828 18:24:53.601699   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.601710   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:53.601717   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:53.601782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:53.641268   77396 cri.go:89] found id: ""
	I0828 18:24:53.641302   77396 logs.go:276] 0 containers: []
	W0828 18:24:53.641315   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:53.641332   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:53.641363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:53.695496   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:53.695532   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:53.708691   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:53.708722   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:53.779280   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:53.779307   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:53.779320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:53.859258   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:53.859295   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:51.402746   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.403126   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:51.575165   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:53.575930   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:55.576188   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:54.243984   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.743976   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:56.403005   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:56.416305   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:56.416376   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:56.448916   77396 cri.go:89] found id: ""
	I0828 18:24:56.448944   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.448955   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:56.448962   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:56.449022   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:56.483870   77396 cri.go:89] found id: ""
	I0828 18:24:56.483897   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.483905   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:56.483910   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:56.483970   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:56.516615   77396 cri.go:89] found id: ""
	I0828 18:24:56.516642   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.516649   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:56.516655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:56.516712   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:56.551561   77396 cri.go:89] found id: ""
	I0828 18:24:56.551584   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.551591   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:56.551599   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:56.551668   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:56.586089   77396 cri.go:89] found id: ""
	I0828 18:24:56.586120   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.586130   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:56.586138   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:56.586197   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:56.617988   77396 cri.go:89] found id: ""
	I0828 18:24:56.618018   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.618028   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:56.618034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:56.618111   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:56.664493   77396 cri.go:89] found id: ""
	I0828 18:24:56.664526   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.664535   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:56.664540   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:56.664601   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:56.698191   77396 cri.go:89] found id: ""
	I0828 18:24:56.698217   77396 logs.go:276] 0 containers: []
	W0828 18:24:56.698228   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:56.698237   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:56.698251   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:56.747197   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:56.747225   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:56.760236   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:56.760262   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:56.831931   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:56.831955   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:56.831969   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:56.908578   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:56.908621   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:59.450148   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:24:59.464476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:24:59.464548   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:24:59.500934   77396 cri.go:89] found id: ""
	I0828 18:24:59.500956   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.500965   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:24:59.500970   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:24:59.501019   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:24:59.532711   77396 cri.go:89] found id: ""
	I0828 18:24:59.532740   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.532747   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:24:59.532753   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:24:59.532802   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:24:59.564974   77396 cri.go:89] found id: ""
	I0828 18:24:59.565001   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.565009   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:24:59.565016   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:24:59.565073   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:24:59.597924   77396 cri.go:89] found id: ""
	I0828 18:24:59.597957   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.597967   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:24:59.597975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:24:59.598030   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:24:59.630179   77396 cri.go:89] found id: ""
	I0828 18:24:59.630207   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.630216   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:24:59.630222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:24:59.630279   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:24:59.664755   77396 cri.go:89] found id: ""
	I0828 18:24:59.664783   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.664793   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:24:59.664800   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:24:59.664860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:24:59.701556   77396 cri.go:89] found id: ""
	I0828 18:24:59.701581   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.701590   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:24:59.701596   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:24:59.701646   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:24:59.733387   77396 cri.go:89] found id: ""
	I0828 18:24:59.733422   77396 logs.go:276] 0 containers: []
	W0828 18:24:59.733430   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:24:59.733439   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:24:59.733450   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:24:59.780962   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:24:59.780994   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:24:59.795998   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:24:59.796034   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:24:59.864864   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:24:59.864886   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:24:59.864902   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:24:59.941914   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:24:59.941957   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:24:55.901611   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:57.902218   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.902364   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:58.076387   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:00.575268   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:24:59.243885   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:01.742980   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.480133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:02.492804   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:02.492863   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:02.525573   77396 cri.go:89] found id: ""
	I0828 18:25:02.525600   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.525609   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:02.525614   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:02.525675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:02.558640   77396 cri.go:89] found id: ""
	I0828 18:25:02.558670   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.558680   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:02.558687   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:02.558746   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:02.598803   77396 cri.go:89] found id: ""
	I0828 18:25:02.598838   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.598851   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:02.598860   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:02.598931   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:02.634067   77396 cri.go:89] found id: ""
	I0828 18:25:02.634110   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.634121   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:02.634128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:02.634188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:02.671495   77396 cri.go:89] found id: ""
	I0828 18:25:02.671520   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.671529   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:02.671536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:02.671595   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:02.704478   77396 cri.go:89] found id: ""
	I0828 18:25:02.704510   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.704522   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:02.704530   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:02.704591   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:02.736799   77396 cri.go:89] found id: ""
	I0828 18:25:02.736831   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.736840   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:02.736846   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:02.736905   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:02.770820   77396 cri.go:89] found id: ""
	I0828 18:25:02.770846   77396 logs.go:276] 0 containers: []
	W0828 18:25:02.770856   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:02.770866   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:02.770885   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:02.848618   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:02.848645   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:02.848662   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:02.924704   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:02.924738   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:02.960776   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:02.960811   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:03.011600   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:03.011645   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:02.402547   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:04.903615   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:02.576294   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.075828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:03.743629   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.744476   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:08.243316   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:05.527662   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:05.540652   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:05.540737   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:05.574620   77396 cri.go:89] found id: ""
	I0828 18:25:05.574650   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.574660   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:05.574668   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:05.574729   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:05.607594   77396 cri.go:89] found id: ""
	I0828 18:25:05.607621   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.607629   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:05.607634   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:05.607691   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:05.650792   77396 cri.go:89] found id: ""
	I0828 18:25:05.650823   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.650833   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:05.650841   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:05.650909   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:05.684453   77396 cri.go:89] found id: ""
	I0828 18:25:05.684481   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.684492   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:05.684499   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:05.684564   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:05.717875   77396 cri.go:89] found id: ""
	I0828 18:25:05.717904   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.717914   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:05.717921   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:05.717980   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:05.754114   77396 cri.go:89] found id: ""
	I0828 18:25:05.754143   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.754155   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:05.754163   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:05.754220   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:05.786354   77396 cri.go:89] found id: ""
	I0828 18:25:05.786399   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.786411   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:05.786418   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:05.786473   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:05.818108   77396 cri.go:89] found id: ""
	I0828 18:25:05.818134   77396 logs.go:276] 0 containers: []
	W0828 18:25:05.818141   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:05.818149   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:05.818164   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:05.868731   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:05.868762   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:05.882333   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:05.882360   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:05.951978   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:05.952003   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:05.952015   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:06.028537   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:06.028573   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:08.567011   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:08.580607   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:08.580675   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:08.613821   77396 cri.go:89] found id: ""
	I0828 18:25:08.613847   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.613858   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:08.613865   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:08.613929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:08.648994   77396 cri.go:89] found id: ""
	I0828 18:25:08.649021   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.649030   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:08.649036   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:08.649084   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:08.680804   77396 cri.go:89] found id: ""
	I0828 18:25:08.680829   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.680837   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:08.680844   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:08.680903   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:08.717926   77396 cri.go:89] found id: ""
	I0828 18:25:08.717962   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.717973   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:08.717980   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:08.718043   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:08.751928   77396 cri.go:89] found id: ""
	I0828 18:25:08.751957   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.751967   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:08.751975   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:08.752037   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:08.791400   77396 cri.go:89] found id: ""
	I0828 18:25:08.791423   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.791432   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:08.791437   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:08.791497   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:08.828072   77396 cri.go:89] found id: ""
	I0828 18:25:08.828106   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.828118   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:08.828125   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:08.828190   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:08.881175   77396 cri.go:89] found id: ""
	I0828 18:25:08.881204   77396 logs.go:276] 0 containers: []
	W0828 18:25:08.881216   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:08.881226   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:08.881241   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:08.970432   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:08.970469   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:09.006975   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:09.007002   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:09.059881   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:09.059919   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:09.073543   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:09.073567   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:09.143468   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:07.403012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.901414   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:07.075904   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:09.077674   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:10.244567   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:12.742811   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.644356   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:11.657229   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:11.657297   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:11.695036   77396 cri.go:89] found id: ""
	I0828 18:25:11.695059   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.695067   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:11.695073   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:11.695123   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:11.726524   77396 cri.go:89] found id: ""
	I0828 18:25:11.726548   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.726556   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:11.726561   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:11.726608   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:11.759249   77396 cri.go:89] found id: ""
	I0828 18:25:11.759278   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.759289   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:11.759296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:11.759356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:11.794109   77396 cri.go:89] found id: ""
	I0828 18:25:11.794154   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.794163   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:11.794169   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:11.794221   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:11.828378   77396 cri.go:89] found id: ""
	I0828 18:25:11.828403   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.828411   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:11.828416   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:11.828470   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:11.864009   77396 cri.go:89] found id: ""
	I0828 18:25:11.864035   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.864043   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:11.864049   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:11.864108   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:11.895844   77396 cri.go:89] found id: ""
	I0828 18:25:11.895870   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.895878   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:11.895883   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:11.895932   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:11.932149   77396 cri.go:89] found id: ""
	I0828 18:25:11.932180   77396 logs.go:276] 0 containers: []
	W0828 18:25:11.932190   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:11.932208   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:11.932222   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:11.982478   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:11.982514   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:11.995466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:11.995498   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:12.058507   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:12.058531   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:12.058546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:12.138225   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:12.138260   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:14.675970   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:14.688744   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:14.688811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:14.720771   77396 cri.go:89] found id: ""
	I0828 18:25:14.720795   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.720803   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:14.720808   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:14.720855   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:14.754047   77396 cri.go:89] found id: ""
	I0828 18:25:14.754071   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.754095   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:14.754103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:14.754159   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:14.789214   77396 cri.go:89] found id: ""
	I0828 18:25:14.789244   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.789256   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:14.789263   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:14.789331   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:14.822366   77396 cri.go:89] found id: ""
	I0828 18:25:14.822399   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.822411   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:14.822419   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:14.822489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:14.855905   77396 cri.go:89] found id: ""
	I0828 18:25:14.855932   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.855942   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:14.855949   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:14.856007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:14.889492   77396 cri.go:89] found id: ""
	I0828 18:25:14.889519   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.889529   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:14.889536   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:14.889594   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:14.923892   77396 cri.go:89] found id: ""
	I0828 18:25:14.923921   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.923932   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:14.923940   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:14.923998   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:14.954979   77396 cri.go:89] found id: ""
	I0828 18:25:14.955002   77396 logs.go:276] 0 containers: []
	W0828 18:25:14.955009   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:14.955017   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:14.955029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:15.006233   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:15.006266   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:15.019702   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:15.019729   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:15.090916   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:15.090943   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:15.090959   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:15.166150   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:15.166190   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:11.902996   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.402539   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:11.574819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:13.575405   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:16.074386   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:14.743486   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.243491   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:17.703473   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:17.716353   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:17.716440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:17.750334   77396 cri.go:89] found id: ""
	I0828 18:25:17.750367   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.750376   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:17.750382   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:17.750440   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:17.783429   77396 cri.go:89] found id: ""
	I0828 18:25:17.783475   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.783488   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:17.783496   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:17.783561   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:17.819014   77396 cri.go:89] found id: ""
	I0828 18:25:17.819041   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.819052   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:17.819060   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:17.819118   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:17.856138   77396 cri.go:89] found id: ""
	I0828 18:25:17.856168   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.856179   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:17.856186   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:17.856248   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:17.891579   77396 cri.go:89] found id: ""
	I0828 18:25:17.891611   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.891619   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:17.891626   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:17.891687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:17.924709   77396 cri.go:89] found id: ""
	I0828 18:25:17.924771   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.924798   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:17.924808   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:17.924874   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:17.955875   77396 cri.go:89] found id: ""
	I0828 18:25:17.955903   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.955913   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:17.955920   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:17.955977   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:17.993827   77396 cri.go:89] found id: ""
	I0828 18:25:17.993861   77396 logs.go:276] 0 containers: []
	W0828 18:25:17.993872   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:17.993882   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:17.993897   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:18.046501   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:18.046534   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:18.060008   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:18.060040   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:18.128546   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:18.128567   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:18.128582   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:18.204859   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:18.204896   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:16.901986   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.902594   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:18.076564   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.575785   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:19.243545   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:21.244384   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:20.745360   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:20.759428   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:20.759511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:20.794748   77396 cri.go:89] found id: ""
	I0828 18:25:20.794780   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.794789   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:20.794794   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:20.794843   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:20.834595   77396 cri.go:89] found id: ""
	I0828 18:25:20.834623   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.834636   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:20.834642   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:20.834720   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:20.870609   77396 cri.go:89] found id: ""
	I0828 18:25:20.870636   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.870646   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:20.870653   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:20.870710   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:20.903739   77396 cri.go:89] found id: ""
	I0828 18:25:20.903764   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.903774   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:20.903782   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:20.903841   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:20.937331   77396 cri.go:89] found id: ""
	I0828 18:25:20.937360   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.937367   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:20.937373   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:20.937424   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:20.971140   77396 cri.go:89] found id: ""
	I0828 18:25:20.971169   77396 logs.go:276] 0 containers: []
	W0828 18:25:20.971178   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:20.971184   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:20.971231   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:21.002714   77396 cri.go:89] found id: ""
	I0828 18:25:21.002743   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.002753   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:21.002761   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:21.002833   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:21.034802   77396 cri.go:89] found id: ""
	I0828 18:25:21.034827   77396 logs.go:276] 0 containers: []
	W0828 18:25:21.034837   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:21.034848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:21.034862   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:21.091088   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:21.091128   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:21.103535   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:21.103569   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:21.177175   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:21.177202   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:21.177217   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:21.257125   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:21.257161   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:23.797074   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:23.810097   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:23.810171   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:23.843943   77396 cri.go:89] found id: ""
	I0828 18:25:23.843972   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.843984   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:23.843991   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:23.844054   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:23.879872   77396 cri.go:89] found id: ""
	I0828 18:25:23.879906   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.879918   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:23.879926   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:23.879985   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:23.914109   77396 cri.go:89] found id: ""
	I0828 18:25:23.914136   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.914145   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:23.914153   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:23.914200   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:23.952672   77396 cri.go:89] found id: ""
	I0828 18:25:23.952700   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.952708   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:23.952714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:23.952759   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:23.986813   77396 cri.go:89] found id: ""
	I0828 18:25:23.986839   77396 logs.go:276] 0 containers: []
	W0828 18:25:23.986855   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:23.986861   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:23.986917   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:24.019358   77396 cri.go:89] found id: ""
	I0828 18:25:24.019387   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.019396   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:24.019413   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:24.019487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:24.053389   77396 cri.go:89] found id: ""
	I0828 18:25:24.053415   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.053423   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:24.053429   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:24.053477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:24.086618   77396 cri.go:89] found id: ""
	I0828 18:25:24.086652   77396 logs.go:276] 0 containers: []
	W0828 18:25:24.086660   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:24.086667   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:24.086677   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:24.136243   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:24.136277   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:24.150031   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:24.150071   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:24.229689   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:24.229729   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:24.229746   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:24.307152   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:24.307197   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:20.902691   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.401748   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:22.575828   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.075159   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:23.743296   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:25.743656   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.243947   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:26.844828   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:26.858915   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:26.858989   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:26.896094   77396 cri.go:89] found id: ""
	I0828 18:25:26.896123   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.896132   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:26.896138   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:26.896187   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:26.934896   77396 cri.go:89] found id: ""
	I0828 18:25:26.934925   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.934936   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:26.934944   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:26.935007   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:26.967673   77396 cri.go:89] found id: ""
	I0828 18:25:26.967700   77396 logs.go:276] 0 containers: []
	W0828 18:25:26.967708   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:26.967714   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:26.967780   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:27.000095   77396 cri.go:89] found id: ""
	I0828 18:25:27.000124   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.000133   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:27.000140   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:27.000192   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:27.038158   77396 cri.go:89] found id: ""
	I0828 18:25:27.038186   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.038195   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:27.038201   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:27.038253   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:27.073606   77396 cri.go:89] found id: ""
	I0828 18:25:27.073634   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.073649   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:27.073657   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:27.073713   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:27.105139   77396 cri.go:89] found id: ""
	I0828 18:25:27.105163   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.105176   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:27.105182   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:27.105235   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:27.137985   77396 cri.go:89] found id: ""
	I0828 18:25:27.138014   77396 logs.go:276] 0 containers: []
	W0828 18:25:27.138025   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:27.138036   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:27.138055   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:27.187983   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:27.188018   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:27.200260   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:27.200286   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:27.273005   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:27.273026   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:27.273038   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:27.353333   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:27.353375   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:29.890515   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:29.903924   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:29.903994   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:29.936189   77396 cri.go:89] found id: ""
	I0828 18:25:29.936221   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.936231   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:29.936240   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:29.936354   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:29.968319   77396 cri.go:89] found id: ""
	I0828 18:25:29.968349   77396 logs.go:276] 0 containers: []
	W0828 18:25:29.968359   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:29.968366   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:29.968436   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:30.001331   77396 cri.go:89] found id: ""
	I0828 18:25:30.001358   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.001383   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:30.001391   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:30.001477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:30.035610   77396 cri.go:89] found id: ""
	I0828 18:25:30.035634   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.035642   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:30.035648   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:30.035695   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:30.067304   77396 cri.go:89] found id: ""
	I0828 18:25:30.067335   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.067346   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:30.067354   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:30.067429   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:30.105020   77396 cri.go:89] found id: ""
	I0828 18:25:30.105049   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.105057   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:30.105063   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:30.105126   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:30.142048   77396 cri.go:89] found id: ""
	I0828 18:25:30.142097   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.142110   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:30.142117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:30.142180   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:30.173099   77396 cri.go:89] found id: ""
	I0828 18:25:30.173131   77396 logs.go:276] 0 containers: []
	W0828 18:25:30.173140   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:30.173149   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:30.173166   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:25:25.901875   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:28.401339   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.402248   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:27.076181   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:29.575216   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:30.743526   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:33.242940   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	W0828 18:25:30.238946   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:30.238968   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:30.238980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:30.320484   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:30.320523   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:30.360028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:30.360056   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:30.412663   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:30.412697   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:32.927100   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:32.940555   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:32.940636   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:32.973182   77396 cri.go:89] found id: ""
	I0828 18:25:32.973221   77396 logs.go:276] 0 containers: []
	W0828 18:25:32.973233   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:32.973242   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:32.973303   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:33.006096   77396 cri.go:89] found id: ""
	I0828 18:25:33.006125   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.006134   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:33.006139   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:33.006191   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:33.038430   77396 cri.go:89] found id: ""
	I0828 18:25:33.038461   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.038472   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:33.038480   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:33.038542   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:33.070266   77396 cri.go:89] found id: ""
	I0828 18:25:33.070294   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.070303   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:33.070315   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:33.070375   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:33.105248   77396 cri.go:89] found id: ""
	I0828 18:25:33.105278   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.105289   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:33.105296   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:33.105356   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:33.136507   77396 cri.go:89] found id: ""
	I0828 18:25:33.136540   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.136551   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:33.136559   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:33.136618   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:33.167333   77396 cri.go:89] found id: ""
	I0828 18:25:33.167359   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.167370   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:33.167377   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:33.167442   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:33.201302   77396 cri.go:89] found id: ""
	I0828 18:25:33.201331   77396 logs.go:276] 0 containers: []
	W0828 18:25:33.201343   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:33.201352   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:33.201364   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:33.213335   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:33.213361   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:33.278269   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:33.278296   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:33.278310   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:33.357015   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:33.357048   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:33.401463   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:33.401495   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:32.402583   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.402749   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:32.075671   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:34.575951   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.743215   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.243081   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:35.952911   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:35.965925   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:35.965990   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:36.001656   77396 cri.go:89] found id: ""
	I0828 18:25:36.001693   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.001705   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:36.001713   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:36.001784   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:36.035010   77396 cri.go:89] found id: ""
	I0828 18:25:36.035037   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.035045   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:36.035050   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:36.035099   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:36.069113   77396 cri.go:89] found id: ""
	I0828 18:25:36.069148   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.069158   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:36.069164   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:36.069219   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:36.106200   77396 cri.go:89] found id: ""
	I0828 18:25:36.106230   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.106240   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:36.106248   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:36.106316   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:36.138428   77396 cri.go:89] found id: ""
	I0828 18:25:36.138457   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.138468   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:36.138475   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:36.138559   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:36.170084   77396 cri.go:89] found id: ""
	I0828 18:25:36.170112   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.170122   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:36.170128   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:36.170188   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:36.202180   77396 cri.go:89] found id: ""
	I0828 18:25:36.202205   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.202215   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:36.202222   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:36.202285   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:36.236125   77396 cri.go:89] found id: ""
	I0828 18:25:36.236156   77396 logs.go:276] 0 containers: []
	W0828 18:25:36.236167   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:36.236179   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:36.236193   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:36.274230   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:36.274256   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:36.325505   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:36.325546   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:36.338714   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:36.338741   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:36.406404   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:36.406432   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:36.406448   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:38.981942   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:38.995287   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:38.995357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:39.028250   77396 cri.go:89] found id: ""
	I0828 18:25:39.028275   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.028282   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:39.028289   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:39.028335   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:39.061402   77396 cri.go:89] found id: ""
	I0828 18:25:39.061434   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.061444   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:39.061449   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:39.061501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:39.095672   77396 cri.go:89] found id: ""
	I0828 18:25:39.095704   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.095716   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:39.095729   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:39.095789   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:39.130135   77396 cri.go:89] found id: ""
	I0828 18:25:39.130162   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.130170   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:39.130176   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:39.130239   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:39.168529   77396 cri.go:89] found id: ""
	I0828 18:25:39.168560   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.168571   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:39.168578   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:39.168641   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:39.200786   77396 cri.go:89] found id: ""
	I0828 18:25:39.200813   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.200821   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:39.200828   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:39.200876   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:39.232855   77396 cri.go:89] found id: ""
	I0828 18:25:39.232886   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.232894   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:39.232902   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:39.232966   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:39.267241   77396 cri.go:89] found id: ""
	I0828 18:25:39.267273   77396 logs.go:276] 0 containers: []
	W0828 18:25:39.267284   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:39.267294   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:39.267309   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:39.306023   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:39.306061   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:39.357880   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:39.357931   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:39.370886   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:39.370914   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:39.448130   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:39.448151   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:39.448163   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:36.403245   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:38.902238   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:37.075570   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:39.076792   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:40.243633   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.244395   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:42.027111   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:42.039611   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:42.039687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:42.078052   77396 cri.go:89] found id: ""
	I0828 18:25:42.078093   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.078104   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:42.078111   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:42.078169   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:42.112812   77396 cri.go:89] found id: ""
	I0828 18:25:42.112842   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.112851   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:42.112856   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:42.112902   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:42.146846   77396 cri.go:89] found id: ""
	I0828 18:25:42.146875   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.146884   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:42.146891   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:42.146948   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:42.179311   77396 cri.go:89] found id: ""
	I0828 18:25:42.179344   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.179352   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:42.179358   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:42.179422   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:42.212149   77396 cri.go:89] found id: ""
	I0828 18:25:42.212179   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.212192   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:42.212200   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:42.212254   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:42.248322   77396 cri.go:89] found id: ""
	I0828 18:25:42.248358   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.248369   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:42.248382   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:42.248496   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:42.283212   77396 cri.go:89] found id: ""
	I0828 18:25:42.283241   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.283250   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:42.283257   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:42.283318   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:42.327064   77396 cri.go:89] found id: ""
	I0828 18:25:42.327099   77396 logs.go:276] 0 containers: []
	W0828 18:25:42.327110   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:42.327121   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:42.327135   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:42.378545   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:42.378577   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:42.392020   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:42.392045   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:42.464531   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:42.464553   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:42.464564   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:42.543116   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:42.543162   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:45.083935   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:45.096434   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:45.096501   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:45.130059   77396 cri.go:89] found id: ""
	I0828 18:25:45.130098   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.130110   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:45.130117   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:45.130176   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:45.160982   77396 cri.go:89] found id: ""
	I0828 18:25:45.161011   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.161021   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:45.161028   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:45.161086   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:45.191416   77396 cri.go:89] found id: ""
	I0828 18:25:45.191449   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.191460   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:45.191467   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:45.191524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:41.401456   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:43.401666   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.401772   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:41.575819   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.075020   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:44.743053   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:47.242714   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:45.223315   77396 cri.go:89] found id: ""
	I0828 18:25:45.223344   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.223360   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:45.223368   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:45.223421   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:45.255404   77396 cri.go:89] found id: ""
	I0828 18:25:45.255428   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.255435   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:45.255441   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:45.255487   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:45.294671   77396 cri.go:89] found id: ""
	I0828 18:25:45.294705   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.294716   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:45.294724   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:45.294811   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:45.329148   77396 cri.go:89] found id: ""
	I0828 18:25:45.329174   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.329186   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:45.329191   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:45.329249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:45.361976   77396 cri.go:89] found id: ""
	I0828 18:25:45.362007   77396 logs.go:276] 0 containers: []
	W0828 18:25:45.362018   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:45.362028   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:45.362041   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:45.412495   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:45.412530   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:45.425268   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:45.425302   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:45.493451   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:45.493475   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:45.493489   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:45.571427   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:45.571472   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.108133   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:48.120632   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:48.120699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:48.156933   77396 cri.go:89] found id: ""
	I0828 18:25:48.156963   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.156973   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:48.156981   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:48.157045   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:48.188436   77396 cri.go:89] found id: ""
	I0828 18:25:48.188465   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.188473   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:48.188479   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:48.188524   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:48.219558   77396 cri.go:89] found id: ""
	I0828 18:25:48.219588   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.219598   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:48.219605   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:48.219661   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:48.252872   77396 cri.go:89] found id: ""
	I0828 18:25:48.252901   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.252917   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:48.252923   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:48.252975   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:48.288244   77396 cri.go:89] found id: ""
	I0828 18:25:48.288273   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.288283   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:48.288291   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:48.288355   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:48.325077   77396 cri.go:89] found id: ""
	I0828 18:25:48.325114   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.325126   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:48.325134   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:48.325195   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:48.358163   77396 cri.go:89] found id: ""
	I0828 18:25:48.358191   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.358202   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:48.358210   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:48.358259   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:48.409246   77396 cri.go:89] found id: ""
	I0828 18:25:48.409277   77396 logs.go:276] 0 containers: []
	W0828 18:25:48.409287   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:48.409299   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:48.409314   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:48.425228   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:48.425259   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:48.493169   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:48.493188   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:48.493201   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:48.573486   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:48.573524   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:48.615846   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:48.615879   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:47.901530   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.901707   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:46.574662   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:48.575614   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.075530   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:49.244444   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.744518   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:51.165546   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:51.178743   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:51.178807   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:51.214299   77396 cri.go:89] found id: ""
	I0828 18:25:51.214329   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.214340   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:51.214349   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:51.214426   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:51.247057   77396 cri.go:89] found id: ""
	I0828 18:25:51.247086   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.247096   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:51.247103   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:51.247174   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:51.279381   77396 cri.go:89] found id: ""
	I0828 18:25:51.279413   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.279423   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:51.279430   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:51.279492   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:51.314237   77396 cri.go:89] found id: ""
	I0828 18:25:51.314266   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.314277   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:51.314286   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:51.314352   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:51.347496   77396 cri.go:89] found id: ""
	I0828 18:25:51.347518   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.347526   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:51.347532   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:51.347578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:51.381705   77396 cri.go:89] found id: ""
	I0828 18:25:51.381742   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.381753   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:51.381762   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:51.381816   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:51.413157   77396 cri.go:89] found id: ""
	I0828 18:25:51.413186   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.413196   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:51.413203   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:51.413261   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:51.443228   77396 cri.go:89] found id: ""
	I0828 18:25:51.443251   77396 logs.go:276] 0 containers: []
	W0828 18:25:51.443266   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:51.443274   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:51.443287   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:51.490927   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:51.490961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:51.505308   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:51.505334   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:51.572077   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:51.572109   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:51.572125   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:51.658398   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:51.658441   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:54.199638   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:54.213449   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:54.213525   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:54.249698   77396 cri.go:89] found id: ""
	I0828 18:25:54.249720   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.249727   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:54.249733   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:54.249782   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:54.285235   77396 cri.go:89] found id: ""
	I0828 18:25:54.285267   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.285279   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:54.285287   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:54.285344   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:54.322535   77396 cri.go:89] found id: ""
	I0828 18:25:54.322562   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.322571   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:54.322577   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:54.322640   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:54.357995   77396 cri.go:89] found id: ""
	I0828 18:25:54.358025   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.358036   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:54.358045   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:54.358129   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:54.391112   77396 cri.go:89] found id: ""
	I0828 18:25:54.391137   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.391145   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:54.391150   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:54.391213   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:54.424248   77396 cri.go:89] found id: ""
	I0828 18:25:54.424278   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.424288   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:54.424295   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:54.424357   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:54.456529   77396 cri.go:89] found id: ""
	I0828 18:25:54.456553   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.456561   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:54.456566   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:54.456619   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:54.489226   77396 cri.go:89] found id: ""
	I0828 18:25:54.489251   77396 logs.go:276] 0 containers: []
	W0828 18:25:54.489259   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:54.489268   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:54.489283   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:54.544282   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:54.544318   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:54.557511   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:54.557549   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:54.631057   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:54.631081   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:54.631096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:54.711874   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:54.711910   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:51.902237   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.402216   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:53.076058   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:55.577768   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:54.244062   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:56.244857   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:57.251826   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:25:57.264806   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:25:57.264872   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:25:57.300005   77396 cri.go:89] found id: ""
	I0828 18:25:57.300031   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.300041   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:25:57.300049   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:25:57.300128   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:25:57.333070   77396 cri.go:89] found id: ""
	I0828 18:25:57.333099   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.333110   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:25:57.333117   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:25:57.333181   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:25:57.369343   77396 cri.go:89] found id: ""
	I0828 18:25:57.369372   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.369390   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:25:57.369398   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:25:57.369462   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:25:57.401729   77396 cri.go:89] found id: ""
	I0828 18:25:57.401756   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.401764   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:25:57.401770   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:25:57.401824   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:25:57.432890   77396 cri.go:89] found id: ""
	I0828 18:25:57.432914   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.432921   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:25:57.432927   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:25:57.432973   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:25:57.467572   77396 cri.go:89] found id: ""
	I0828 18:25:57.467596   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.467604   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:25:57.467609   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:25:57.467663   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:25:57.500316   77396 cri.go:89] found id: ""
	I0828 18:25:57.500344   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.500351   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:25:57.500357   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:25:57.500411   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:25:57.531676   77396 cri.go:89] found id: ""
	I0828 18:25:57.531700   77396 logs.go:276] 0 containers: []
	W0828 18:25:57.531708   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:25:57.531716   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:25:57.531728   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:25:57.604613   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:25:57.604639   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:25:57.604653   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:25:57.684622   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:25:57.684658   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:25:57.720566   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:25:57.720656   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:25:57.770832   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:25:57.770866   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:25:56.902012   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:59.402189   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.075045   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.575328   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:25:58.743586   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:00.743675   76435 pod_ready.go:103] pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:01.737703   76435 pod_ready.go:82] duration metric: took 4m0.000480749s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:01.737748   76435 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-f56j2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0828 18:26:01.737772   76435 pod_ready.go:39] duration metric: took 4m13.763880094s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:01.737804   76435 kubeadm.go:597] duration metric: took 4m22.607627094s to restartPrimaryControlPlane
	W0828 18:26:01.737875   76435 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:01.737908   76435 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:00.283493   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:00.296500   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:00.296578   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:00.334395   77396 cri.go:89] found id: ""
	I0828 18:26:00.334420   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.334428   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:00.334434   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:00.334481   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:00.369178   77396 cri.go:89] found id: ""
	I0828 18:26:00.369205   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.369214   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:00.369219   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:00.369283   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:00.405962   77396 cri.go:89] found id: ""
	I0828 18:26:00.405990   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.406000   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:00.406007   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:00.406064   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:00.438684   77396 cri.go:89] found id: ""
	I0828 18:26:00.438717   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.438728   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:00.438735   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:00.438795   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:00.472357   77396 cri.go:89] found id: ""
	I0828 18:26:00.472385   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.472397   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:00.472403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:00.472450   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:00.506891   77396 cri.go:89] found id: ""
	I0828 18:26:00.506920   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.506931   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:00.506938   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:00.506999   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:00.546387   77396 cri.go:89] found id: ""
	I0828 18:26:00.546413   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.546422   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:00.546427   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:00.546474   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:00.598714   77396 cri.go:89] found id: ""
	I0828 18:26:00.598745   77396 logs.go:276] 0 containers: []
	W0828 18:26:00.598753   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:00.598761   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:00.598779   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:00.617100   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:00.617130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:00.687317   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:00.687348   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:00.687363   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:00.770097   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:00.770130   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:00.815848   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:00.815883   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:03.365469   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:03.379117   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:03.379182   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:03.414122   77396 cri.go:89] found id: ""
	I0828 18:26:03.414148   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.414155   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:03.414161   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:03.414208   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:03.446953   77396 cri.go:89] found id: ""
	I0828 18:26:03.446975   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.446983   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:03.446988   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:03.447036   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:03.481034   77396 cri.go:89] found id: ""
	I0828 18:26:03.481059   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.481067   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:03.481072   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:03.481120   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:03.514785   77396 cri.go:89] found id: ""
	I0828 18:26:03.514814   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.514824   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:03.514832   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:03.514888   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:03.548302   77396 cri.go:89] found id: ""
	I0828 18:26:03.548330   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.548340   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:03.548348   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:03.548423   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:03.582430   77396 cri.go:89] found id: ""
	I0828 18:26:03.582460   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.582469   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:03.582476   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:03.582529   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:03.615108   77396 cri.go:89] found id: ""
	I0828 18:26:03.615136   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.615144   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:03.615149   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:03.615205   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:03.647282   77396 cri.go:89] found id: ""
	I0828 18:26:03.647312   77396 logs.go:276] 0 containers: []
	W0828 18:26:03.647321   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:03.647330   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:03.647340   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:03.660466   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:03.660500   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:03.732746   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:03.732767   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:03.732780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:03.811286   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:03.811320   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:03.848482   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:03.848513   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:01.402393   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.402670   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.403016   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:03.075650   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:05.574825   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:06.400122   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:06.412839   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:06.412908   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:06.448570   77396 cri.go:89] found id: ""
	I0828 18:26:06.448597   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.448608   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:06.448620   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:06.448687   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:06.482446   77396 cri.go:89] found id: ""
	I0828 18:26:06.482476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.482487   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:06.482495   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:06.482555   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:06.514640   77396 cri.go:89] found id: ""
	I0828 18:26:06.514669   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.514679   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:06.514686   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:06.514747   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:06.548997   77396 cri.go:89] found id: ""
	I0828 18:26:06.549020   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.549028   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:06.549034   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:06.549079   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:06.583557   77396 cri.go:89] found id: ""
	I0828 18:26:06.583582   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.583589   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:06.583595   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:06.583665   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:06.617447   77396 cri.go:89] found id: ""
	I0828 18:26:06.617476   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.617484   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:06.617490   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:06.617549   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:06.650387   77396 cri.go:89] found id: ""
	I0828 18:26:06.650419   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.650427   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:06.650433   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:06.650489   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:06.682851   77396 cri.go:89] found id: ""
	I0828 18:26:06.682879   77396 logs.go:276] 0 containers: []
	W0828 18:26:06.682888   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:06.682899   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:06.682961   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:06.695365   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:06.695392   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:06.760214   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:06.760245   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:06.760261   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:06.839827   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:06.839863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:06.877298   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:06.877325   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.430694   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:09.443043   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:09.443115   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:09.476557   77396 cri.go:89] found id: ""
	I0828 18:26:09.476583   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.476594   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:09.476602   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:09.476659   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:09.514909   77396 cri.go:89] found id: ""
	I0828 18:26:09.514935   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.514943   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:09.514948   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:09.515009   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:09.549769   77396 cri.go:89] found id: ""
	I0828 18:26:09.549800   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.549810   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:09.549818   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:09.549868   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:09.582793   77396 cri.go:89] found id: ""
	I0828 18:26:09.582821   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.582831   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:09.582838   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:09.582896   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:09.615603   77396 cri.go:89] found id: ""
	I0828 18:26:09.615636   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.615648   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:09.615655   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:09.615716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:09.650046   77396 cri.go:89] found id: ""
	I0828 18:26:09.650087   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.650100   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:09.650108   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:09.650161   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:09.681726   77396 cri.go:89] found id: ""
	I0828 18:26:09.681754   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.681763   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:09.681768   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:09.681821   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:09.713008   77396 cri.go:89] found id: ""
	I0828 18:26:09.713036   77396 logs.go:276] 0 containers: []
	W0828 18:26:09.713045   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:09.713054   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:09.713065   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:09.792720   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:09.792757   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:09.831752   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:09.831785   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:09.880877   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:09.880913   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:09.896178   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:09.896215   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:09.962282   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:07.901074   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:09.905185   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:08.074185   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:10.075331   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.462957   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:12.475266   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:12.475345   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:12.508364   77396 cri.go:89] found id: ""
	I0828 18:26:12.508394   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.508405   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:12.508413   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:12.508472   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:12.544152   77396 cri.go:89] found id: ""
	I0828 18:26:12.544185   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.544197   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:12.544204   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:12.544264   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:12.578358   77396 cri.go:89] found id: ""
	I0828 18:26:12.578384   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.578394   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:12.578403   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:12.578456   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:12.609183   77396 cri.go:89] found id: ""
	I0828 18:26:12.609206   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.609214   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:12.609219   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:12.609292   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:12.641791   77396 cri.go:89] found id: ""
	I0828 18:26:12.641816   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.641824   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:12.641830   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:12.641887   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:12.673857   77396 cri.go:89] found id: ""
	I0828 18:26:12.673881   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.673889   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:12.673894   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:12.673938   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:12.709501   77396 cri.go:89] found id: ""
	I0828 18:26:12.709525   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.709532   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:12.709538   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:12.709585   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:12.742972   77396 cri.go:89] found id: ""
	I0828 18:26:12.742994   77396 logs.go:276] 0 containers: []
	W0828 18:26:12.743002   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:12.743010   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:12.743026   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:12.813949   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:12.813969   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:12.813980   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:12.894829   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:12.894873   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:12.939533   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:12.939565   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:12.990319   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:12.990358   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:12.404061   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:14.902346   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:12.575908   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.075489   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:15.503923   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:15.518161   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:15.518240   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:15.564145   77396 cri.go:89] found id: ""
	I0828 18:26:15.564173   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.564182   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:15.564189   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:15.564249   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:15.600654   77396 cri.go:89] found id: ""
	I0828 18:26:15.600682   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.600692   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:15.600699   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:15.600760   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:15.633089   77396 cri.go:89] found id: ""
	I0828 18:26:15.633122   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.633131   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:15.633137   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:15.633186   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:15.667339   77396 cri.go:89] found id: ""
	I0828 18:26:15.667370   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.667382   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:15.667389   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:15.667451   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:15.699463   77396 cri.go:89] found id: ""
	I0828 18:26:15.699499   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.699508   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:15.699513   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:15.699573   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:15.735841   77396 cri.go:89] found id: ""
	I0828 18:26:15.735866   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.735873   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:15.735879   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:15.735929   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:15.771111   77396 cri.go:89] found id: ""
	I0828 18:26:15.771135   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.771142   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:15.771148   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:15.771198   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:15.804845   77396 cri.go:89] found id: ""
	I0828 18:26:15.804868   77396 logs.go:276] 0 containers: []
	W0828 18:26:15.804875   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:15.804884   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:15.804894   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:15.856744   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:15.856780   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:15.869496   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:15.869520   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:15.938957   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:15.938982   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:15.938998   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:16.016482   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:16.016525   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:18.554851   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:18.568241   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.568317   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.601401   77396 cri.go:89] found id: ""
	I0828 18:26:18.601439   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.601448   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:18.601454   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.601511   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.634784   77396 cri.go:89] found id: ""
	I0828 18:26:18.634809   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.634816   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:18.634822   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.634875   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:18.666540   77396 cri.go:89] found id: ""
	I0828 18:26:18.666572   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.666584   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:18.666591   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:18.666643   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:18.699180   77396 cri.go:89] found id: ""
	I0828 18:26:18.699210   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.699221   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:18.699228   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:18.699289   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:18.735001   77396 cri.go:89] found id: ""
	I0828 18:26:18.735032   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.735042   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:18.735050   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:18.735116   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:18.767404   77396 cri.go:89] found id: ""
	I0828 18:26:18.767441   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.767454   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:18.767472   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:18.767537   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:18.798857   77396 cri.go:89] found id: ""
	I0828 18:26:18.798881   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.798890   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:18.798896   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:18.798942   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:18.830113   77396 cri.go:89] found id: ""
	I0828 18:26:18.830137   77396 logs.go:276] 0 containers: []
	W0828 18:26:18.830145   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:18.830153   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:18.830165   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:18.843161   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:18.843188   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:18.910736   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:18.910760   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:18.910775   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:18.991698   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:18.991734   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.038896   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.038929   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:17.402193   76486 pod_ready.go:103] pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:18.902692   76486 pod_ready.go:82] duration metric: took 4m0.007006782s for pod "metrics-server-6867b74b74-lccm2" in "kube-system" namespace to be "Ready" ...
	E0828 18:26:18.902716   76486 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:26:18.902724   76486 pod_ready.go:39] duration metric: took 4m4.058254547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:18.902739   76486 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:18.902762   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:18.902819   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:18.954071   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:18.954115   76486 cri.go:89] found id: ""
	I0828 18:26:18.954123   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:18.954183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.958270   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:18.958345   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:18.994068   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:18.994105   76486 cri.go:89] found id: ""
	I0828 18:26:18.994116   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:18.994173   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:18.998807   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:18.998881   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:19.050622   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:19.050649   76486 cri.go:89] found id: ""
	I0828 18:26:19.050657   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:19.050738   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.055283   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:19.055340   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:19.093254   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.093280   76486 cri.go:89] found id: ""
	I0828 18:26:19.093288   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:19.093341   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.097062   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:19.097118   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:19.135962   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.135989   76486 cri.go:89] found id: ""
	I0828 18:26:19.135999   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:19.136046   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.140440   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:19.140510   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:19.176913   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.176942   76486 cri.go:89] found id: ""
	I0828 18:26:19.176951   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:19.177007   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.180742   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:19.180794   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:19.218796   76486 cri.go:89] found id: ""
	I0828 18:26:19.218821   76486 logs.go:276] 0 containers: []
	W0828 18:26:19.218832   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:19.218839   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:19.218898   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:19.253110   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:19.253134   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.253140   76486 cri.go:89] found id: ""
	I0828 18:26:19.253148   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:19.253205   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.257338   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:19.261148   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:19.261173   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:19.299620   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:19.299659   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:19.337533   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:19.337560   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:19.836298   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:19.836350   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:19.881132   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:19.881168   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:19.921986   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:19.922023   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:19.975419   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:19.975455   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:20.045848   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:20.045895   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:20.059683   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:20.059715   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:20.186442   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:20.186472   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:20.233152   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:20.233187   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:20.278546   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:20.278575   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:20.325985   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:20.326015   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:17.075945   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:19.076890   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:21.590663   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:21.602796   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:21.602860   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:21.635583   77396 cri.go:89] found id: ""
	I0828 18:26:21.635612   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.635623   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:26:21.635631   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:21.635699   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:21.666982   77396 cri.go:89] found id: ""
	I0828 18:26:21.667023   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.667034   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:26:21.667041   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:21.667098   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:21.698817   77396 cri.go:89] found id: ""
	I0828 18:26:21.698851   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.698862   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:26:21.698870   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:21.698925   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:21.729618   77396 cri.go:89] found id: ""
	I0828 18:26:21.729645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.729654   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:26:21.729660   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:21.729718   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:21.763188   77396 cri.go:89] found id: ""
	I0828 18:26:21.763214   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.763222   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:26:21.763227   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:21.763272   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:21.795613   77396 cri.go:89] found id: ""
	I0828 18:26:21.795645   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.795656   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:26:21.795663   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:21.795716   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:21.828271   77396 cri.go:89] found id: ""
	I0828 18:26:21.828299   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.828308   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:21.828314   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:26:21.828358   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:26:21.860098   77396 cri.go:89] found id: ""
	I0828 18:26:21.860124   77396 logs.go:276] 0 containers: []
	W0828 18:26:21.860132   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:26:21.860141   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:21.860155   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:21.908269   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:21.908308   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:21.921123   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:21.921149   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:26:21.985059   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:26:21.985078   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:21.985091   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:22.065705   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:26:22.065745   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:24.608061   77396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:24.621768   77396 kubeadm.go:597] duration metric: took 4m4.233964466s to restartPrimaryControlPlane
	W0828 18:26:24.621838   77396 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0828 18:26:24.621863   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:26:22.860616   76486 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:22.877760   76486 api_server.go:72] duration metric: took 4m15.760769788s to wait for apiserver process to appear ...
	I0828 18:26:22.877790   76486 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:22.877829   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:22.877891   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:22.924739   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:22.924763   76486 cri.go:89] found id: ""
	I0828 18:26:22.924772   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:22.924831   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.928747   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:22.928810   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:22.967171   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:22.967193   76486 cri.go:89] found id: ""
	I0828 18:26:22.967200   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:22.967247   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:22.970989   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:22.971048   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:23.004804   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.004830   76486 cri.go:89] found id: ""
	I0828 18:26:23.004839   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:23.004895   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.008551   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:23.008616   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:23.041475   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.041496   76486 cri.go:89] found id: ""
	I0828 18:26:23.041504   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:23.041562   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.045265   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:23.045321   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:23.078749   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.078772   76486 cri.go:89] found id: ""
	I0828 18:26:23.078781   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:23.078827   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.082647   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:23.082712   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:23.117104   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.117128   76486 cri.go:89] found id: ""
	I0828 18:26:23.117138   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:23.117196   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.121011   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:23.121066   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:23.154564   76486 cri.go:89] found id: ""
	I0828 18:26:23.154592   76486 logs.go:276] 0 containers: []
	W0828 18:26:23.154614   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:23.154626   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:23.154689   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:23.192082   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.192101   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.192106   76486 cri.go:89] found id: ""
	I0828 18:26:23.192114   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:23.192175   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.196183   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:23.199786   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:23.199814   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:23.241986   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:23.242019   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:23.276718   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:23.276750   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:23.353187   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:23.353224   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:23.366901   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:23.366937   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:23.403147   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:23.403181   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:23.440461   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:23.440491   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:23.476039   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:23.476067   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:23.524702   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:23.524743   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:23.558484   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:23.558510   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:23.994897   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:23.994933   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:24.091558   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:24.091591   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:24.133767   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:24.133801   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:21.575113   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:23.576760   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:26.075770   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:27.939212   76435 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.201267084s)
	I0828 18:26:27.939337   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:27.964796   76435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:27.978456   76435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:27.988580   76435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:27.988599   76435 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:27.988640   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.008900   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.008955   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.020342   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.032723   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.032784   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.049205   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.058740   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.058803   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.067969   76435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.078089   76435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.078145   76435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.086950   76435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.136931   76435 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 18:26:28.137117   76435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:26:28.249761   76435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:26:28.249900   76435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:26:28.250020   76435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 18:26:28.258994   76435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:26:28.261527   76435 out.go:235]   - Generating certificates and keys ...
	I0828 18:26:28.261644   76435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:26:28.261732   76435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:26:28.261848   76435 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:26:28.261939   76435 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:26:28.262038   76435 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:26:28.262155   76435 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:26:28.262254   76435 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:26:28.262338   76435 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:26:28.262452   76435 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:26:28.262557   76435 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:26:28.262635   76435 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:26:28.262731   76435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:26:28.434898   76435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:26:28.833039   76435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 18:26:28.930840   76435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:26:29.103123   76435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:26:29.201561   76435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:26:29.202039   76435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:26:29.204545   76435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:26:28.691092   77396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.069202982s)
	I0828 18:26:28.691158   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:28.705352   77396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 18:26:28.715421   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:26:28.724698   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:26:28.724718   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:26:28.724771   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:26:28.733594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:26:28.733676   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:26:28.742759   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:26:28.752127   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:26:28.752187   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:26:28.761279   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.770451   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:26:28.770518   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:26:28.779635   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:26:28.788337   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:26:28.788405   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:26:28.797794   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:26:28.997476   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:26.682052   76486 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8444/healthz ...
	I0828 18:26:26.687081   76486 api_server.go:279] https://192.168.39.226:8444/healthz returned 200:
	ok
	I0828 18:26:26.687992   76486 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:26.688008   76486 api_server.go:131] duration metric: took 3.810212378s to wait for apiserver health ...
	I0828 18:26:26.688016   76486 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:26.688038   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:26:26.688084   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:26:26.729049   76486 cri.go:89] found id: "d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:26.729072   76486 cri.go:89] found id: ""
	I0828 18:26:26.729080   76486 logs.go:276] 1 containers: [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342]
	I0828 18:26:26.729127   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.733643   76486 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:26:26.733710   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:26:26.774655   76486 cri.go:89] found id: "3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:26.774675   76486 cri.go:89] found id: ""
	I0828 18:26:26.774682   76486 logs.go:276] 1 containers: [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143]
	I0828 18:26:26.774732   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.778654   76486 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:26:26.778704   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:26:26.812844   76486 cri.go:89] found id: "93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:26.812870   76486 cri.go:89] found id: ""
	I0828 18:26:26.812878   76486 logs.go:276] 1 containers: [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db]
	I0828 18:26:26.812928   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.816783   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:26:26.816847   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:26:26.856925   76486 cri.go:89] found id: "101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:26.856945   76486 cri.go:89] found id: ""
	I0828 18:26:26.856957   76486 logs.go:276] 1 containers: [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12]
	I0828 18:26:26.857013   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.860845   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:26:26.860906   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:26:26.893850   76486 cri.go:89] found id: "729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:26.893873   76486 cri.go:89] found id: ""
	I0828 18:26:26.893882   76486 logs.go:276] 1 containers: [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee]
	I0828 18:26:26.893940   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.897799   76486 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:26:26.897875   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:26:26.932914   76486 cri.go:89] found id: "1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:26.932936   76486 cri.go:89] found id: ""
	I0828 18:26:26.932942   76486 logs.go:276] 1 containers: [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286]
	I0828 18:26:26.932993   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:26.937185   76486 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:26:26.937253   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:26:26.980339   76486 cri.go:89] found id: ""
	I0828 18:26:26.980368   76486 logs.go:276] 0 containers: []
	W0828 18:26:26.980379   76486 logs.go:278] No container was found matching "kindnet"
	I0828 18:26:26.980386   76486 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:26:26.980458   76486 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:26:27.014870   76486 cri.go:89] found id: "02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.014889   76486 cri.go:89] found id: "48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.014893   76486 cri.go:89] found id: ""
	I0828 18:26:27.014899   76486 logs.go:276] 2 containers: [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80]
	I0828 18:26:27.014954   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.018782   76486 ssh_runner.go:195] Run: which crictl
	I0828 18:26:27.022146   76486 logs.go:123] Gathering logs for etcd [3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143] ...
	I0828 18:26:27.022167   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3895a4d3fb7d0ffcb27cdb94552eea17fde189f7d2143b772039a906cc171143"
	I0828 18:26:27.062244   76486 logs.go:123] Gathering logs for kube-scheduler [101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12] ...
	I0828 18:26:27.062271   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 101c4701cc8608428b61a86e09a8649123a6ff5b80eef502ce59a2977fbcce12"
	I0828 18:26:27.097495   76486 logs.go:123] Gathering logs for kube-controller-manager [1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286] ...
	I0828 18:26:27.097528   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d1212a86ca9a98513d58df15fcb64f91e0e0db4346c0303bf0dd91422a10286"
	I0828 18:26:27.150300   76486 logs.go:123] Gathering logs for storage-provisioner [02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522] ...
	I0828 18:26:27.150342   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02d2a37fd69e87f132bead07dbcc934335a7d774d844c276aae9fdf8602f8522"
	I0828 18:26:27.183651   76486 logs.go:123] Gathering logs for storage-provisioner [48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80] ...
	I0828 18:26:27.183680   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48533565061e0b75ccdb447766e12654c8414f5c2c79a9e1673906bd7a326f80"
	I0828 18:26:27.217641   76486 logs.go:123] Gathering logs for kubelet ...
	I0828 18:26:27.217666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:26:27.286627   76486 logs.go:123] Gathering logs for dmesg ...
	I0828 18:26:27.286666   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:26:27.300486   76486 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:26:27.300514   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:26:27.409150   76486 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:26:27.409183   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:26:27.791378   76486 logs.go:123] Gathering logs for container status ...
	I0828 18:26:27.791425   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:26:27.842764   76486 logs.go:123] Gathering logs for kube-apiserver [d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342] ...
	I0828 18:26:27.842799   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4b3a88fe2356ccec7286492979bba59fc9b9e38ccf20e2bfa0b8523d1699342"
	I0828 18:26:27.892361   76486 logs.go:123] Gathering logs for coredns [93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db] ...
	I0828 18:26:27.892393   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 93284522e6de62d3f920c898e15665141cf5aeb82836dbd185ab10031bd012db"
	I0828 18:26:27.926469   76486 logs.go:123] Gathering logs for kube-proxy [729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee] ...
	I0828 18:26:27.926497   76486 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 729f7a235e3df751896aa30983153a72b4ea6d3c3a05ec1334b0227f9cb237ee"
	I0828 18:26:30.478530   76486 system_pods.go:59] 8 kube-system pods found
	I0828 18:26:30.478568   76486 system_pods.go:61] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.478576   76486 system_pods.go:61] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.478583   76486 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.478589   76486 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.478595   76486 system_pods.go:61] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.478608   76486 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.478619   76486 system_pods.go:61] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.478627   76486 system_pods.go:61] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.478637   76486 system_pods.go:74] duration metric: took 3.79061533s to wait for pod list to return data ...
	I0828 18:26:30.478648   76486 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:30.482479   76486 default_sa.go:45] found service account: "default"
	I0828 18:26:30.482507   76486 default_sa.go:55] duration metric: took 3.852493ms for default service account to be created ...
	I0828 18:26:30.482517   76486 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:30.488974   76486 system_pods.go:86] 8 kube-system pods found
	I0828 18:26:30.489014   76486 system_pods.go:89] "coredns-6f6b679f8f-t5lx6" [63a7dcfb-266b-4eb2-bdfb-e8153da41df1] Running
	I0828 18:26:30.489023   76486 system_pods.go:89] "etcd-default-k8s-diff-port-640552" [dd1f7a08-c5e3-4e31-a6b1-5a90595acacd] Running
	I0828 18:26:30.489030   76486 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-640552" [54dd8d8b-f78d-4ee6-b675-d35e69a3848e] Running
	I0828 18:26:30.489038   76486 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-640552" [874d1c4e-0198-479c-9e97-f1ff4528b67b] Running
	I0828 18:26:30.489044   76486 system_pods.go:89] "kube-proxy-lmpft" [cddc57ae-4f38-4fd3-aa82-5552ba727d88] Running
	I0828 18:26:30.489050   76486 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-640552" [b40f077d-0899-405b-a11c-494ca154b212] Running
	I0828 18:26:30.489062   76486 system_pods.go:89] "metrics-server-6867b74b74-lccm2" [a8729f4d-7653-42f2-bcdc-0b95f4aa7080] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:30.489069   76486 system_pods.go:89] "storage-provisioner" [26468a47-d594-4b6c-823b-aea49a222f68] Running
	I0828 18:26:30.489092   76486 system_pods.go:126] duration metric: took 6.568035ms to wait for k8s-apps to be running ...
	I0828 18:26:30.489104   76486 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:30.489163   76486 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:30.508336   76486 system_svc.go:56] duration metric: took 19.222473ms WaitForService to wait for kubelet
	I0828 18:26:30.508369   76486 kubeadm.go:582] duration metric: took 4m23.39138334s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:30.508394   76486 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:30.512219   76486 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:30.512253   76486 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:30.512267   76486 node_conditions.go:105] duration metric: took 3.866556ms to run NodePressure ...
	I0828 18:26:30.512282   76486 start.go:241] waiting for startup goroutines ...
	I0828 18:26:30.512291   76486 start.go:246] waiting for cluster config update ...
	I0828 18:26:30.512306   76486 start.go:255] writing updated cluster config ...
	I0828 18:26:30.512681   76486 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:30.579402   76486 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:30.581444   76486 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-640552" cluster and "default" namespace by default
	I0828 18:26:28.575075   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:30.576207   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:29.206147   76435 out.go:235]   - Booting up control plane ...
	I0828 18:26:29.206257   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:26:29.206365   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:26:29.206494   76435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:26:29.227031   76435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:26:29.235149   76435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:26:29.235246   76435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:26:29.370272   76435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 18:26:29.370462   76435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 18:26:29.872896   76435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.733105ms
	I0828 18:26:29.872975   76435 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 18:26:34.877604   76435 kubeadm.go:310] [api-check] The API server is healthy after 5.002276684s
	I0828 18:26:34.892462   76435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 18:26:34.905804   76435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 18:26:34.932862   76435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 18:26:34.933079   76435 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-014980 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 18:26:34.944560   76435 kubeadm.go:310] [bootstrap-token] Using token: nwgkdo.9yj47woyyi233z66
	I0828 18:26:34.945933   76435 out.go:235]   - Configuring RBAC rules ...
	I0828 18:26:34.946052   76435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 18:26:34.951430   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 18:26:34.963862   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 18:26:34.968038   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 18:26:34.971350   76435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 18:26:34.977521   76435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 18:26:35.282249   76435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 18:26:35.704101   76435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 18:26:36.282971   76435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 18:26:36.284216   76435 kubeadm.go:310] 
	I0828 18:26:36.284337   76435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 18:26:36.284364   76435 kubeadm.go:310] 
	I0828 18:26:36.284457   76435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 18:26:36.284470   76435 kubeadm.go:310] 
	I0828 18:26:36.284504   76435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 18:26:36.284579   76435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 18:26:36.284654   76435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 18:26:36.284667   76435 kubeadm.go:310] 
	I0828 18:26:36.284748   76435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 18:26:36.284758   76435 kubeadm.go:310] 
	I0828 18:26:36.284820   76435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 18:26:36.284826   76435 kubeadm.go:310] 
	I0828 18:26:36.284891   76435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 18:26:36.284988   76435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 18:26:36.285081   76435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 18:26:36.285091   76435 kubeadm.go:310] 
	I0828 18:26:36.285197   76435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 18:26:36.285298   76435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 18:26:36.285309   76435 kubeadm.go:310] 
	I0828 18:26:36.285414   76435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285549   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be \
	I0828 18:26:36.285572   76435 kubeadm.go:310] 	--control-plane 
	I0828 18:26:36.285577   76435 kubeadm.go:310] 
	I0828 18:26:36.285655   76435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 18:26:36.285663   76435 kubeadm.go:310] 
	I0828 18:26:36.285757   76435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nwgkdo.9yj47woyyi233z66 \
	I0828 18:26:36.285886   76435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:fd398a65953fde96046d75d62f3f5ac48e2b68edc3ead138b503d9e6f85b95be 
	I0828 18:26:36.287195   76435 kubeadm.go:310] W0828 18:26:28.113155    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287529   76435 kubeadm.go:310] W0828 18:26:28.114038    2520 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 18:26:36.287633   76435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:26:36.287659   76435 cni.go:84] Creating CNI manager for ""
	I0828 18:26:36.287669   76435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 18:26:36.289019   76435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 18:26:33.075886   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:35.076651   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:36.290213   76435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 18:26:36.302171   76435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 18:26:36.326384   76435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 18:26:36.326452   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:36.326522   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-014980 minikube.k8s.io/updated_at=2024_08_28T18_26_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=embed-certs-014980 minikube.k8s.io/primary=true
	I0828 18:26:36.537331   76435 ops.go:34] apiserver oom_adj: -16
	I0828 18:26:36.537497   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.038467   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:37.537529   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.038147   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:38.537854   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.038193   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:39.538325   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.037978   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:40.537503   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.038001   76435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 18:26:41.160327   76435 kubeadm.go:1113] duration metric: took 4.83392727s to wait for elevateKubeSystemPrivileges
	I0828 18:26:41.160366   76435 kubeadm.go:394] duration metric: took 5m2.080700509s to StartCluster
	I0828 18:26:41.160386   76435 settings.go:142] acquiring lock: {Name:mk3afba82958e55ab84b290ff871ac7f5c78daba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.160469   76435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:26:41.162122   76435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19529-10317/kubeconfig: {Name:mk03e55b5969e0208d9bb492cf09c807b7446b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 18:26:41.162393   76435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.130 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0828 18:26:41.162463   76435 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0828 18:26:41.162547   76435 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-014980"
	I0828 18:26:41.162563   76435 addons.go:69] Setting default-storageclass=true in profile "embed-certs-014980"
	I0828 18:26:41.162588   76435 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-014980"
	I0828 18:26:41.162586   76435 addons.go:69] Setting metrics-server=true in profile "embed-certs-014980"
	W0828 18:26:41.162599   76435 addons.go:243] addon storage-provisioner should already be in state true
	I0828 18:26:41.162610   76435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-014980"
	I0828 18:26:41.162632   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162635   76435 addons.go:234] Setting addon metrics-server=true in "embed-certs-014980"
	W0828 18:26:41.162644   76435 addons.go:243] addon metrics-server should already be in state true
	I0828 18:26:41.162666   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.162612   76435 config.go:182] Loaded profile config "embed-certs-014980": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:26:41.163042   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163054   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163083   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163095   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.163140   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.163160   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.164216   76435 out.go:177] * Verifying Kubernetes components...
	I0828 18:26:41.166298   76435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 18:26:41.178807   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34579
	I0828 18:26:41.178914   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0828 18:26:41.179437   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179515   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.179971   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.179994   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180168   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.180197   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.180346   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180629   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.180982   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181021   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.181761   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.181810   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.182920   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0828 18:26:41.183394   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.183877   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.183900   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.184252   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.184450   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.187788   76435 addons.go:234] Setting addon default-storageclass=true in "embed-certs-014980"
	W0828 18:26:41.187811   76435 addons.go:243] addon default-storageclass should already be in state true
	I0828 18:26:41.187837   76435 host.go:66] Checking if "embed-certs-014980" exists ...
	I0828 18:26:41.188210   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.188242   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.199469   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44065
	I0828 18:26:41.199977   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.200461   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.200487   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.200894   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.201121   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.201369   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40293
	I0828 18:26:41.201749   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.202224   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.202243   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.202811   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.203024   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.203030   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.205127   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.205217   76435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 18:26:41.206606   76435 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.206620   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 18:26:41.206633   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.206678   76435 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0828 18:26:37.575308   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:39.575726   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:41.207928   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 18:26:41.207951   76435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 18:26:41.207971   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.208651   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I0828 18:26:41.209208   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.210020   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.210040   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.210477   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.210537   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211056   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211089   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211123   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211166   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211313   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.211443   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.211572   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.211588   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.211580   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.211600   76435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 18:26:41.211636   76435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 18:26:41.211828   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.211996   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.212159   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.212271   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.228122   76435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0828 18:26:41.228552   76435 main.go:141] libmachine: () Calling .GetVersion
	I0828 18:26:41.229000   76435 main.go:141] libmachine: Using API Version  1
	I0828 18:26:41.229016   76435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 18:26:41.229309   76435 main.go:141] libmachine: () Calling .GetMachineName
	I0828 18:26:41.229565   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetState
	I0828 18:26:41.231484   76435 main.go:141] libmachine: (embed-certs-014980) Calling .DriverName
	I0828 18:26:41.231721   76435 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.231732   76435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 18:26:41.231744   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHHostname
	I0828 18:26:41.234525   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.234901   76435 main.go:141] libmachine: (embed-certs-014980) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:61:8f", ip: ""} in network mk-embed-certs-014980: {Iface:virbr2 ExpiryTime:2024-08-28 19:21:24 +0000 UTC Type:0 Mac:52:54:00:4c:61:8f Iaid: IPaddr:192.168.72.130 Prefix:24 Hostname:embed-certs-014980 Clientid:01:52:54:00:4c:61:8f}
	I0828 18:26:41.234933   76435 main.go:141] libmachine: (embed-certs-014980) DBG | domain embed-certs-014980 has defined IP address 192.168.72.130 and MAC address 52:54:00:4c:61:8f in network mk-embed-certs-014980
	I0828 18:26:41.235097   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHPort
	I0828 18:26:41.235259   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHKeyPath
	I0828 18:26:41.235412   76435 main.go:141] libmachine: (embed-certs-014980) Calling .GetSSHUsername
	I0828 18:26:41.235585   76435 sshutil.go:53] new ssh client: &{IP:192.168.72.130 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/embed-certs-014980/id_rsa Username:docker}
	I0828 18:26:41.375620   76435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 18:26:41.420534   76435 node_ready.go:35] waiting up to 6m0s for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429069   76435 node_ready.go:49] node "embed-certs-014980" has status "Ready":"True"
	I0828 18:26:41.429090   76435 node_ready.go:38] duration metric: took 8.530462ms for node "embed-certs-014980" to be "Ready" ...
	I0828 18:26:41.429098   76435 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:41.438842   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:41.484936   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 18:26:41.535672   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 18:26:41.536914   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 18:26:41.536936   76435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0828 18:26:41.604181   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 18:26:41.604219   76435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 18:26:41.654668   76435 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.654695   76435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 18:26:41.688039   76435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 18:26:41.921155   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921188   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921465   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:41.921544   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.921568   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.921577   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.921842   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.921863   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:41.938676   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:41.938694   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:41.938984   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:41.939034   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690412   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154689373s)
	I0828 18:26:42.690461   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690469   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.690766   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.690810   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.690830   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.690843   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.691076   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.691114   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.691122   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.722795   76435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.034719218s)
	I0828 18:26:42.722840   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.722852   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723141   76435 main.go:141] libmachine: (embed-certs-014980) DBG | Closing plugin on server side
	I0828 18:26:42.723210   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723231   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723249   76435 main.go:141] libmachine: Making call to close driver server
	I0828 18:26:42.723261   76435 main.go:141] libmachine: (embed-certs-014980) Calling .Close
	I0828 18:26:42.723539   76435 main.go:141] libmachine: Successfully made call to close driver server
	I0828 18:26:42.723556   76435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0828 18:26:42.723567   76435 addons.go:475] Verifying addon metrics-server=true in "embed-certs-014980"
	I0828 18:26:42.725524   76435 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0828 18:26:42.726507   76435 addons.go:510] duration metric: took 1.564045136s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0828 18:26:41.576259   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:44.075008   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:46.075323   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:43.445262   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:45.445672   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:47.948313   76435 pod_ready.go:103] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:48.446506   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.446527   76435 pod_ready.go:82] duration metric: took 7.007660638s for pod "coredns-6f6b679f8f-cz29x" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.446538   76435 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451954   76435 pod_ready.go:93] pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.451973   76435 pod_ready.go:82] duration metric: took 5.430099ms for pod "coredns-6f6b679f8f-djjbq" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.451983   76435 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456910   76435 pod_ready.go:93] pod "etcd-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:48.456937   76435 pod_ready.go:82] duration metric: took 4.947692ms for pod "etcd-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:48.456948   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963231   76435 pod_ready.go:93] pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.963252   76435 pod_ready.go:82] duration metric: took 1.506296167s for pod "kube-apiserver-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.963262   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967762   76435 pod_ready.go:93] pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:49.967780   76435 pod_ready.go:82] duration metric: took 4.511839ms for pod "kube-controller-manager-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:49.967788   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043820   76435 pod_ready.go:93] pod "kube-proxy-hzw4m" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.043844   76435 pod_ready.go:82] duration metric: took 76.049661ms for pod "kube-proxy-hzw4m" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.043855   76435 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443261   76435 pod_ready.go:93] pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace has status "Ready":"True"
	I0828 18:26:50.443288   76435 pod_ready.go:82] duration metric: took 399.423823ms for pod "kube-scheduler-embed-certs-014980" in "kube-system" namespace to be "Ready" ...
	I0828 18:26:50.443298   76435 pod_ready.go:39] duration metric: took 9.014190636s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:26:50.443315   76435 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:26:50.443375   76435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:26:50.459400   76435 api_server.go:72] duration metric: took 9.296966752s to wait for apiserver process to appear ...
	I0828 18:26:50.459426   76435 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:26:50.459448   76435 api_server.go:253] Checking apiserver healthz at https://192.168.72.130:8443/healthz ...
	I0828 18:26:50.463861   76435 api_server.go:279] https://192.168.72.130:8443/healthz returned 200:
	ok
	I0828 18:26:50.464779   76435 api_server.go:141] control plane version: v1.31.0
	I0828 18:26:50.464807   76435 api_server.go:131] duration metric: took 5.370633ms to wait for apiserver health ...
	I0828 18:26:50.464817   76435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:26:50.645588   76435 system_pods.go:59] 9 kube-system pods found
	I0828 18:26:50.645620   76435 system_pods.go:61] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:50.645626   76435 system_pods.go:61] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:50.645629   76435 system_pods.go:61] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:50.645633   76435 system_pods.go:61] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:50.645636   76435 system_pods.go:61] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:50.645639   76435 system_pods.go:61] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:50.645642   76435 system_pods.go:61] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:50.645647   76435 system_pods.go:61] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:50.645651   76435 system_pods.go:61] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:50.645658   76435 system_pods.go:74] duration metric: took 180.831741ms to wait for pod list to return data ...
	I0828 18:26:50.645664   76435 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:26:50.844171   76435 default_sa.go:45] found service account: "default"
	I0828 18:26:50.844205   76435 default_sa.go:55] duration metric: took 198.534118ms for default service account to be created ...
	I0828 18:26:50.844217   76435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:26:51.045810   76435 system_pods.go:86] 9 kube-system pods found
	I0828 18:26:51.045839   76435 system_pods.go:89] "coredns-6f6b679f8f-cz29x" [fd89ac5c-011e-4810-b681-fae999af2b6b] Running
	I0828 18:26:51.045844   76435 system_pods.go:89] "coredns-6f6b679f8f-djjbq" [ec3e4fc9-c257-40c5-bee2-6ad7335e8bf8] Running
	I0828 18:26:51.045848   76435 system_pods.go:89] "etcd-embed-certs-014980" [415fcb58-f454-46c4-a81e-1c0378f61505] Running
	I0828 18:26:51.045852   76435 system_pods.go:89] "kube-apiserver-embed-certs-014980" [7a299e73-77a7-470c-927d-883608b1a124] Running
	I0828 18:26:51.045856   76435 system_pods.go:89] "kube-controller-manager-embed-certs-014980" [6e52bbd5-4534-4117-9318-3ef1ae67675f] Running
	I0828 18:26:51.045859   76435 system_pods.go:89] "kube-proxy-hzw4m" [b46e7805-0395-40ae-92e6-ab43eb4b2b2b] Running
	I0828 18:26:51.045865   76435 system_pods.go:89] "kube-scheduler-embed-certs-014980" [9f465a71-a0ee-4179-a5bb-e1663fb5d54f] Running
	I0828 18:26:51.045871   76435 system_pods.go:89] "metrics-server-6867b74b74-7nkmb" [bd303839-96c1-4e38-b7cb-2e66ba627a69] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:26:51.045874   76435 system_pods.go:89] "storage-provisioner" [c9e09413-b695-420e-bf45-1f8f40ff7d05] Running
	I0828 18:26:51.045882   76435 system_pods.go:126] duration metric: took 201.659747ms to wait for k8s-apps to be running ...
	I0828 18:26:51.045889   76435 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:26:51.045930   76435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:26:51.060123   76435 system_svc.go:56] duration metric: took 14.22252ms WaitForService to wait for kubelet
	I0828 18:26:51.060159   76435 kubeadm.go:582] duration metric: took 9.897729666s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:26:51.060184   76435 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:26:51.244017   76435 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:26:51.244042   76435 node_conditions.go:123] node cpu capacity is 2
	I0828 18:26:51.244052   76435 node_conditions.go:105] duration metric: took 183.862561ms to run NodePressure ...
	I0828 18:26:51.244063   76435 start.go:241] waiting for startup goroutines ...
	I0828 18:26:51.244069   76435 start.go:246] waiting for cluster config update ...
	I0828 18:26:51.244080   76435 start.go:255] writing updated cluster config ...
	I0828 18:26:51.244398   76435 ssh_runner.go:195] Run: rm -f paused
	I0828 18:26:51.291241   76435 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:26:51.293227   76435 out.go:177] * Done! kubectl is now configured to use "embed-certs-014980" cluster and "default" namespace by default
	I0828 18:26:48.075513   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:50.576810   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:53.075100   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:55.075381   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:57.076055   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:26:59.575251   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:01.575306   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:04.075576   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.076392   75908 pod_ready.go:103] pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace has status "Ready":"False"
	I0828 18:27:06.575514   75908 pod_ready.go:82] duration metric: took 4m0.006537109s for pod "metrics-server-6867b74b74-d5x89" in "kube-system" namespace to be "Ready" ...
	E0828 18:27:06.575539   75908 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0828 18:27:06.575549   75908 pod_ready.go:39] duration metric: took 4m3.208242253s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 18:27:06.575566   75908 api_server.go:52] waiting for apiserver process to appear ...
	I0828 18:27:06.575596   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:06.575649   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:06.625222   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:06.625247   75908 cri.go:89] found id: ""
	I0828 18:27:06.625257   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:06.625317   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.629941   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:06.630003   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:06.665372   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:06.665400   75908 cri.go:89] found id: ""
	I0828 18:27:06.665410   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:06.665472   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.669511   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:06.669599   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:06.709706   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:06.709734   75908 cri.go:89] found id: ""
	I0828 18:27:06.709742   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:06.709801   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.713964   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:06.714023   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:06.748110   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:06.748136   75908 cri.go:89] found id: ""
	I0828 18:27:06.748158   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:06.748217   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.752020   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:06.752087   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:06.788455   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:06.788476   75908 cri.go:89] found id: ""
	I0828 18:27:06.788483   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:06.788537   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.792710   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:06.792779   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:06.830031   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:06.830055   75908 cri.go:89] found id: ""
	I0828 18:27:06.830065   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:06.830147   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.833910   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:06.833970   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:06.869172   75908 cri.go:89] found id: ""
	I0828 18:27:06.869199   75908 logs.go:276] 0 containers: []
	W0828 18:27:06.869210   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:06.869217   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:06.869281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:06.906605   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:06.906626   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:06.906632   75908 cri.go:89] found id: ""
	I0828 18:27:06.906644   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:06.906705   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.911374   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:06.915494   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:06.915515   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:06.961094   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:06.961128   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:07.018511   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:07.018543   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:07.058413   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:07.058443   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:07.098028   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:07.098055   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:07.136706   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:07.136731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:07.203021   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:07.203059   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:07.239714   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:07.239758   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:07.746282   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:07.746326   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:07.812731   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:07.812771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:07.828453   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:07.828484   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:07.967513   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:07.967610   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:08.013719   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:08.013745   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.553418   75908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 18:27:10.569945   75908 api_server.go:72] duration metric: took 4m14.476728398s to wait for apiserver process to appear ...
	I0828 18:27:10.569977   75908 api_server.go:88] waiting for apiserver healthz status ...
	I0828 18:27:10.570010   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:10.570057   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:10.605869   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:10.605899   75908 cri.go:89] found id: ""
	I0828 18:27:10.605908   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:10.606013   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.609868   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:10.609949   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:10.647627   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:10.647655   75908 cri.go:89] found id: ""
	I0828 18:27:10.647664   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:10.647721   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.651916   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:10.651980   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:10.690782   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:10.690805   75908 cri.go:89] found id: ""
	I0828 18:27:10.690815   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:10.690870   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.694896   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:10.694944   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:10.735502   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:10.735530   75908 cri.go:89] found id: ""
	I0828 18:27:10.735541   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:10.735603   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.739627   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:10.739702   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:10.776213   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:10.776233   75908 cri.go:89] found id: ""
	I0828 18:27:10.776240   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:10.776293   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.779889   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:10.779948   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:10.815919   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:10.815949   75908 cri.go:89] found id: ""
	I0828 18:27:10.815958   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:10.816022   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.820317   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:10.820385   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:10.859049   75908 cri.go:89] found id: ""
	I0828 18:27:10.859077   75908 logs.go:276] 0 containers: []
	W0828 18:27:10.859085   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:10.859091   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:10.859138   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:10.894511   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:10.894543   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.894549   75908 cri.go:89] found id: ""
	I0828 18:27:10.894558   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:10.894616   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.899725   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:10.907315   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:10.907339   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:10.941374   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:10.941401   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:11.372069   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:11.372111   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:11.425168   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:11.425192   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:11.439748   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:11.439771   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:11.484252   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:11.484278   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:11.522975   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:11.523000   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:11.590753   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:11.590797   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:11.629694   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:11.629725   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:11.667597   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:11.667627   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:11.732423   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:11.732469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:11.841885   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:11.841929   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:11.885703   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:11.885741   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.428276   75908 api_server.go:253] Checking apiserver healthz at https://192.168.61.138:8443/healthz ...
	I0828 18:27:14.433359   75908 api_server.go:279] https://192.168.61.138:8443/healthz returned 200:
	ok
	I0828 18:27:14.434430   75908 api_server.go:141] control plane version: v1.31.0
	I0828 18:27:14.434448   75908 api_server.go:131] duration metric: took 3.864464723s to wait for apiserver health ...
	I0828 18:27:14.434458   75908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 18:27:14.434487   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:27:14.434545   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:27:14.472125   75908 cri.go:89] found id: "2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.472153   75908 cri.go:89] found id: ""
	I0828 18:27:14.472163   75908 logs.go:276] 1 containers: [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83]
	I0828 18:27:14.472225   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.476217   75908 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:27:14.476281   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:27:14.514886   75908 cri.go:89] found id: "701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:14.514904   75908 cri.go:89] found id: ""
	I0828 18:27:14.514911   75908 logs.go:276] 1 containers: [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb]
	I0828 18:27:14.514965   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.518930   75908 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:27:14.519000   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:27:14.556279   75908 cri.go:89] found id: "b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:14.556302   75908 cri.go:89] found id: ""
	I0828 18:27:14.556311   75908 logs.go:276] 1 containers: [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9]
	I0828 18:27:14.556356   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.560542   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:27:14.560612   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:27:14.604981   75908 cri.go:89] found id: "5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:14.605008   75908 cri.go:89] found id: ""
	I0828 18:27:14.605017   75908 logs.go:276] 1 containers: [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64]
	I0828 18:27:14.605076   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.608769   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:27:14.608833   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:27:14.644014   75908 cri.go:89] found id: "f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:14.644036   75908 cri.go:89] found id: ""
	I0828 18:27:14.644044   75908 logs.go:276] 1 containers: [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7]
	I0828 18:27:14.644089   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.648138   75908 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:27:14.648211   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:27:14.686898   75908 cri.go:89] found id: "4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:14.686919   75908 cri.go:89] found id: ""
	I0828 18:27:14.686926   75908 logs.go:276] 1 containers: [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4]
	I0828 18:27:14.686971   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.690752   75908 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:27:14.690818   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:27:14.723146   75908 cri.go:89] found id: ""
	I0828 18:27:14.723174   75908 logs.go:276] 0 containers: []
	W0828 18:27:14.723185   75908 logs.go:278] No container was found matching "kindnet"
	I0828 18:27:14.723200   75908 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0828 18:27:14.723264   75908 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0828 18:27:14.758168   75908 cri.go:89] found id: "176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.758196   75908 cri.go:89] found id: "851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:14.758202   75908 cri.go:89] found id: ""
	I0828 18:27:14.758212   75908 logs.go:276] 2 containers: [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a]
	I0828 18:27:14.758269   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.761928   75908 ssh_runner.go:195] Run: which crictl
	I0828 18:27:14.765388   75908 logs.go:123] Gathering logs for storage-provisioner [176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb] ...
	I0828 18:27:14.765407   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 176a416d0685ea1c0baba17afbf791ea4966a14a9a040167abf026fa4f50e4eb"
	I0828 18:27:14.798567   75908 logs.go:123] Gathering logs for container status ...
	I0828 18:27:14.798598   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:27:14.841992   75908 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:27:14.842024   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0828 18:27:14.947020   75908 logs.go:123] Gathering logs for kube-apiserver [2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83] ...
	I0828 18:27:14.947050   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cb32118555695bf682f7d5c46e649193ea0ca6f143e140eb4752a9cd047be83"
	I0828 18:27:14.996788   75908 logs.go:123] Gathering logs for coredns [b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9] ...
	I0828 18:27:14.996815   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b670cbb724f621188915c8d78fed14dfded54bd8be63477c54e07bf5f60b39f9"
	I0828 18:27:15.031706   75908 logs.go:123] Gathering logs for kube-scheduler [5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64] ...
	I0828 18:27:15.031731   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5eb6f94089b127c6d24af68a1f83c2a6e55fedb0e76f7d08adf3056d32c1ce64"
	I0828 18:27:15.065813   75908 logs.go:123] Gathering logs for kube-controller-manager [4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4] ...
	I0828 18:27:15.065839   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4be517729ec137364ba38a962d19b173cbb86e715325b131897c196233d551a4"
	I0828 18:27:15.121439   75908 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:27:15.121469   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:27:15.535661   75908 logs.go:123] Gathering logs for kubelet ...
	I0828 18:27:15.535709   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0828 18:27:15.603334   75908 logs.go:123] Gathering logs for dmesg ...
	I0828 18:27:15.603374   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:27:15.619628   75908 logs.go:123] Gathering logs for etcd [701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb] ...
	I0828 18:27:15.619657   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 701d65f0dbe97c02e08403d746ecda8bab0ad78d4f4d031f57b8fa9e15151ecb"
	I0828 18:27:15.661179   75908 logs.go:123] Gathering logs for kube-proxy [f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7] ...
	I0828 18:27:15.661203   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1e183b4b26b554d682c41c68ff4b34240058ad56084730b31fe64f2953135f7"
	I0828 18:27:15.697954   75908 logs.go:123] Gathering logs for storage-provisioner [851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a] ...
	I0828 18:27:15.697983   75908 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 851b142e4bcda282c35cf2a73a606295ea14f64723dba727a558bfc753aefd1a"
	I0828 18:27:18.238105   75908 system_pods.go:59] 8 kube-system pods found
	I0828 18:27:18.238137   75908 system_pods.go:61] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.238144   75908 system_pods.go:61] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.238149   75908 system_pods.go:61] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.238154   75908 system_pods.go:61] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.238158   75908 system_pods.go:61] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.238163   75908 system_pods.go:61] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.238171   75908 system_pods.go:61] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.238177   75908 system_pods.go:61] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.238187   75908 system_pods.go:74] duration metric: took 3.803722719s to wait for pod list to return data ...
	I0828 18:27:18.238198   75908 default_sa.go:34] waiting for default service account to be created ...
	I0828 18:27:18.240936   75908 default_sa.go:45] found service account: "default"
	I0828 18:27:18.240955   75908 default_sa.go:55] duration metric: took 2.749733ms for default service account to be created ...
	I0828 18:27:18.240963   75908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 18:27:18.245768   75908 system_pods.go:86] 8 kube-system pods found
	I0828 18:27:18.245793   75908 system_pods.go:89] "coredns-6f6b679f8f-fjclq" [3279bcbb-5b7f-464a-a6d0-4206b877065b] Running
	I0828 18:27:18.245800   75908 system_pods.go:89] "etcd-no-preload-072854" [5ce506ac-8767-43f4-961d-28267e9f82de] Running
	I0828 18:27:18.245806   75908 system_pods.go:89] "kube-apiserver-no-preload-072854" [b90a2300-1f19-42fa-b0bf-a4d08e01ac74] Running
	I0828 18:27:18.245810   75908 system_pods.go:89] "kube-controller-manager-no-preload-072854" [264396af-c066-4a4a-a520-bdec7c6c492d] Running
	I0828 18:27:18.245815   75908 system_pods.go:89] "kube-proxy-tfxfd" [a136ed96-1b09-43d2-9471-fdc7f17f5760] Running
	I0828 18:27:18.245820   75908 system_pods.go:89] "kube-scheduler-no-preload-072854" [cca23631-79ed-4dfb-9f5f-ca438d6dfdbc] Running
	I0828 18:27:18.245829   75908 system_pods.go:89] "metrics-server-6867b74b74-d5x89" [2f77d1e5-7779-46f9-881d-ff1a6a25098e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 18:27:18.245838   75908 system_pods.go:89] "storage-provisioner" [0fdf9f52-ebdf-4ab6-8f34-1e773a4409df] Running
	I0828 18:27:18.245851   75908 system_pods.go:126] duration metric: took 4.881291ms to wait for k8s-apps to be running ...
	I0828 18:27:18.245862   75908 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 18:27:18.245909   75908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:27:18.260429   75908 system_svc.go:56] duration metric: took 14.56108ms WaitForService to wait for kubelet
	I0828 18:27:18.260458   75908 kubeadm.go:582] duration metric: took 4m22.167245383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 18:27:18.260489   75908 node_conditions.go:102] verifying NodePressure condition ...
	I0828 18:27:18.262765   75908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0828 18:27:18.262784   75908 node_conditions.go:123] node cpu capacity is 2
	I0828 18:27:18.262793   75908 node_conditions.go:105] duration metric: took 2.299468ms to run NodePressure ...
	I0828 18:27:18.262803   75908 start.go:241] waiting for startup goroutines ...
	I0828 18:27:18.262810   75908 start.go:246] waiting for cluster config update ...
	I0828 18:27:18.262820   75908 start.go:255] writing updated cluster config ...
	I0828 18:27:18.263070   75908 ssh_runner.go:195] Run: rm -f paused
	I0828 18:27:18.312755   75908 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0828 18:27:18.314827   75908 out.go:177] * Done! kubectl is now configured to use "no-preload-072854" cluster and "default" namespace by default
	I0828 18:28:25.556329   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:28:25.556449   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:28:25.558031   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:28:25.558117   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:28:25.558222   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:28:25.558363   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:28:25.558517   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:28:25.558594   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:28:25.561046   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:28:25.561124   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:28:25.561179   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:28:25.561288   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:28:25.561384   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:28:25.561489   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:28:25.561562   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:28:25.561797   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:28:25.561914   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:28:25.562010   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:28:25.562230   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:28:25.562294   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:28:25.562402   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:28:25.562478   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:28:25.562554   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:28:25.562706   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:28:25.562818   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:28:25.562926   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:28:25.563006   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:28:25.563043   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:28:25.563144   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:28:25.564527   77396 out.go:235]   - Booting up control plane ...
	I0828 18:28:25.564629   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:28:25.564716   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:28:25.564816   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:28:25.564929   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:28:25.565154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:28:25.565226   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:28:25.565326   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565541   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.565660   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.565895   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566002   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566184   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566245   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566411   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566473   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:28:25.566629   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:28:25.566636   77396 kubeadm.go:310] 
	I0828 18:28:25.566672   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:28:25.566706   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:28:25.566712   77396 kubeadm.go:310] 
	I0828 18:28:25.566740   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:28:25.566769   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:28:25.566881   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:28:25.566893   77396 kubeadm.go:310] 
	I0828 18:28:25.567033   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:28:25.567080   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:28:25.567126   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:28:25.567142   77396 kubeadm.go:310] 
	I0828 18:28:25.567276   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:28:25.567351   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:28:25.567358   77396 kubeadm.go:310] 
	I0828 18:28:25.567461   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:28:25.567534   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:28:25.567612   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:28:25.567689   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:28:25.567726   77396 kubeadm.go:310] 
	W0828 18:28:25.567820   77396 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0828 18:28:25.567858   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0828 18:28:26.036779   77396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 18:28:26.051771   77396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 18:28:26.060912   77396 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 18:28:26.060932   77396 kubeadm.go:157] found existing configuration files:
	
	I0828 18:28:26.060971   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 18:28:26.069420   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 18:28:26.069486   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 18:28:26.078268   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 18:28:26.086594   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 18:28:26.086669   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 18:28:26.095756   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.104747   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 18:28:26.104809   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 18:28:26.113847   77396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 18:28:26.122600   77396 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 18:28:26.122673   77396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 18:28:26.131697   77396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0828 18:28:26.338828   77396 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 18:30:22.315132   77396 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0828 18:30:22.315271   77396 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0828 18:30:22.316887   77396 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0828 18:30:22.316970   77396 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 18:30:22.317067   77396 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 18:30:22.317199   77396 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 18:30:22.317289   77396 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0828 18:30:22.317340   77396 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 18:30:22.319318   77396 out.go:235]   - Generating certificates and keys ...
	I0828 18:30:22.319406   77396 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 18:30:22.319461   77396 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 18:30:22.319540   77396 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0828 18:30:22.319620   77396 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0828 18:30:22.319715   77396 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0828 18:30:22.319791   77396 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0828 18:30:22.319888   77396 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0828 18:30:22.319972   77396 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0828 18:30:22.320068   77396 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0828 18:30:22.320161   77396 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0828 18:30:22.320232   77396 kubeadm.go:310] [certs] Using the existing "sa" key
	I0828 18:30:22.320312   77396 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 18:30:22.320362   77396 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 18:30:22.320411   77396 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 18:30:22.320468   77396 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 18:30:22.320511   77396 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 18:30:22.320627   77396 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 18:30:22.320748   77396 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 18:30:22.320805   77396 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 18:30:22.320922   77396 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 18:30:22.322522   77396 out.go:235]   - Booting up control plane ...
	I0828 18:30:22.322640   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 18:30:22.322739   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 18:30:22.322843   77396 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 18:30:22.322939   77396 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 18:30:22.323154   77396 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0828 18:30:22.323234   77396 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0828 18:30:22.323320   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323518   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323616   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.323851   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.323947   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324157   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324215   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324383   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324448   77396 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0828 18:30:22.324605   77396 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0828 18:30:22.324614   77396 kubeadm.go:310] 
	I0828 18:30:22.324651   77396 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0828 18:30:22.324685   77396 kubeadm.go:310] 		timed out waiting for the condition
	I0828 18:30:22.324694   77396 kubeadm.go:310] 
	I0828 18:30:22.324726   77396 kubeadm.go:310] 	This error is likely caused by:
	I0828 18:30:22.324755   77396 kubeadm.go:310] 		- The kubelet is not running
	I0828 18:30:22.324846   77396 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0828 18:30:22.324853   77396 kubeadm.go:310] 
	I0828 18:30:22.324939   77396 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0828 18:30:22.324971   77396 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0828 18:30:22.325003   77396 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0828 18:30:22.325009   77396 kubeadm.go:310] 
	I0828 18:30:22.325137   77396 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0828 18:30:22.325259   77396 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0828 18:30:22.325271   77396 kubeadm.go:310] 
	I0828 18:30:22.325394   77396 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0828 18:30:22.325485   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0828 18:30:22.325599   77396 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0828 18:30:22.325707   77396 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0828 18:30:22.325725   77396 kubeadm.go:310] 
	I0828 18:30:22.325793   77396 kubeadm.go:394] duration metric: took 8m1.985321645s to StartCluster
	I0828 18:30:22.325845   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0828 18:30:22.325912   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0828 18:30:22.369637   77396 cri.go:89] found id: ""
	I0828 18:30:22.369669   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.369680   77396 logs.go:278] No container was found matching "kube-apiserver"
	I0828 18:30:22.369687   77396 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0828 18:30:22.369748   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0828 18:30:22.404363   77396 cri.go:89] found id: ""
	I0828 18:30:22.404395   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.404404   77396 logs.go:278] No container was found matching "etcd"
	I0828 18:30:22.404412   77396 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0828 18:30:22.404477   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0828 18:30:22.439923   77396 cri.go:89] found id: ""
	I0828 18:30:22.439949   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.439956   77396 logs.go:278] No container was found matching "coredns"
	I0828 18:30:22.439962   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0828 18:30:22.440016   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0828 18:30:22.480139   77396 cri.go:89] found id: ""
	I0828 18:30:22.480169   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.480186   77396 logs.go:278] No container was found matching "kube-scheduler"
	I0828 18:30:22.480195   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0828 18:30:22.480255   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0828 18:30:22.517020   77396 cri.go:89] found id: ""
	I0828 18:30:22.517053   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.517064   77396 logs.go:278] No container was found matching "kube-proxy"
	I0828 18:30:22.517075   77396 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0828 18:30:22.517151   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0828 18:30:22.551369   77396 cri.go:89] found id: ""
	I0828 18:30:22.551391   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.551399   77396 logs.go:278] No container was found matching "kube-controller-manager"
	I0828 18:30:22.551409   77396 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0828 18:30:22.551458   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0828 18:30:22.585656   77396 cri.go:89] found id: ""
	I0828 18:30:22.585686   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.585697   77396 logs.go:278] No container was found matching "kindnet"
	I0828 18:30:22.585704   77396 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0828 18:30:22.585781   77396 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0828 18:30:22.620157   77396 cri.go:89] found id: ""
	I0828 18:30:22.620190   77396 logs.go:276] 0 containers: []
	W0828 18:30:22.620201   77396 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0828 18:30:22.620212   77396 logs.go:123] Gathering logs for dmesg ...
	I0828 18:30:22.620230   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0828 18:30:22.634209   77396 logs.go:123] Gathering logs for describe nodes ...
	I0828 18:30:22.634245   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0828 18:30:22.711047   77396 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0828 18:30:22.711082   77396 logs.go:123] Gathering logs for CRI-O ...
	I0828 18:30:22.711096   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0828 18:30:22.816037   77396 logs.go:123] Gathering logs for container status ...
	I0828 18:30:22.816075   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0828 18:30:22.885999   77396 logs.go:123] Gathering logs for kubelet ...
	I0828 18:30:22.886029   77396 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0828 18:30:22.936793   77396 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0828 18:30:22.936856   77396 out.go:270] * 
	W0828 18:30:22.936920   77396 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.936941   77396 out.go:270] * 
	W0828 18:30:22.937749   77396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 18:30:22.941026   77396 out.go:201] 
	W0828 18:30:22.942189   77396 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0828 18:30:22.942300   77396 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0828 18:30:22.942335   77396 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0828 18:30:22.943829   77396 out.go:201] 
	
	
	==> CRI-O <==
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.664023422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870502664001225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cb2a16e-61e7-4d8c-8f1b-c045c7a88bdd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.664614664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=73c23aa3-3db4-4421-9398-85dbac27830f name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.664689080Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=73c23aa3-3db4-4421-9398-85dbac27830f name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.664725037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=73c23aa3-3db4-4421-9398-85dbac27830f name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.693777607Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69c84806-8267-4d97-a2fd-ebd889d9e4fe name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.693865630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69c84806-8267-4d97-a2fd-ebd889d9e4fe name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.694832979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a2b05054-12f2-42cd-a445-b362ac90039b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.695226689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870502695196500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a2b05054-12f2-42cd-a445-b362ac90039b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.695693636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f545f07-58ad-4234-a7ce-bbb86e2591a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.695741827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f545f07-58ad-4234-a7ce-bbb86e2591a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.695777604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2f545f07-58ad-4234-a7ce-bbb86e2591a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.728741835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee735830-7a59-40e8-a8f7-15cbe270a570 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.728824118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee735830-7a59-40e8-a8f7-15cbe270a570 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.729939842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86b791c8-7a7f-4991-82a2-17f38fd6059b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.730319531Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870502730294309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86b791c8-7a7f-4991-82a2-17f38fd6059b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.730960927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=166c6bd3-933f-4ad5-a881-56c472179fcd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.731043439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=166c6bd3-933f-4ad5-a881-56c472179fcd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.731093142Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=166c6bd3-933f-4ad5-a881-56c472179fcd name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.764895174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a348636-5e9c-4ef3-bb48-d81a6958c8a6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.764971822Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a348636-5e9c-4ef3-bb48-d81a6958c8a6 name=/runtime.v1.RuntimeService/Version
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.766252075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=030f813d-34fe-44ce-a4d5-67425b0ca39a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.766715104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1724870502766688730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=030f813d-34fe-44ce-a4d5-67425b0ca39a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.767217035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18e6569d-c320-492e-a498-185139653bc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.767270716Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18e6569d-c320-492e-a498-185139653bc0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 28 18:41:42 old-k8s-version-131737 crio[633]: time="2024-08-28 18:41:42.767344954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=18e6569d-c320-492e-a498-185139653bc0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug28 18:21] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053841] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.038492] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861305] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug28 18:22] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.351947] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.186067] systemd-fstab-generator[559]: Ignoring "noauto" option for root device
	[  +0.056442] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067838] systemd-fstab-generator[571]: Ignoring "noauto" option for root device
	[  +0.210439] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.181798] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.238436] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.531745] systemd-fstab-generator[889]: Ignoring "noauto" option for root device
	[  +0.068173] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.717012] systemd-fstab-generator[1015]: Ignoring "noauto" option for root device
	[ +12.982776] kauditd_printk_skb: 46 callbacks suppressed
	[Aug28 18:26] systemd-fstab-generator[5132]: Ignoring "noauto" option for root device
	[Aug28 18:28] systemd-fstab-generator[5416]: Ignoring "noauto" option for root device
	[  +0.064360] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:41:42 up 19 min,  0 users,  load average: 0.00, 0.03, 0.06
	Linux old-k8s-version-131737 5.10.207 #1 SMP Tue Aug 27 20:49:29 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: net.(*sysDialer).dialSerial(0xc000925480, 0x4f7fe40, 0xc000379c80, 0xc0009eae60, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/dial.go:548 +0x152
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: net.(*Dialer).DialContext(0xc000123b00, 0x4f7fe00, 0xc00011e018, 0x48ab5d6, 0x3, 0xc0009cac60, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/dial.go:425 +0x6e5
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0008c8f00, 0x4f7fe00, 0xc00011e018, 0x48ab5d6, 0x3, 0xc0009cac60, 0x24, 0x60, 0x7fce882e4da8, 0x118, ...)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: net/http.(*Transport).dial(0xc000b04140, 0x4f7fe00, 0xc00011e018, 0x48ab5d6, 0x3, 0xc0009cac60, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: net/http.(*Transport).dialConn(0xc000b04140, 0x4f7fe00, 0xc00011e018, 0x0, 0xc0009e7080, 0x5, 0xc0009cac60, 0x24, 0x0, 0xc00095afc0, ...)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: net/http.(*Transport).dialConnFor(0xc000b04140, 0xc000cbc000)
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: created by net/http.(*Transport).queueForDial
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Aug 28 18:41:40 old-k8s-version-131737 kubelet[6910]: E0828 18:41:40.645171    6910 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.99:8443: connect: connection refused
	Aug 28 18:41:40 old-k8s-version-131737 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 28 18:41:40 old-k8s-version-131737 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 28 18:41:41 old-k8s-version-131737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 138.
	Aug 28 18:41:41 old-k8s-version-131737 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 28 18:41:41 old-k8s-version-131737 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 28 18:41:41 old-k8s-version-131737 kubelet[6928]: I0828 18:41:41.374761    6928 server.go:416] Version: v1.20.0
	Aug 28 18:41:41 old-k8s-version-131737 kubelet[6928]: I0828 18:41:41.375010    6928 server.go:837] Client rotation is on, will bootstrap in background
	Aug 28 18:41:41 old-k8s-version-131737 kubelet[6928]: I0828 18:41:41.376830    6928 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 28 18:41:41 old-k8s-version-131737 kubelet[6928]: W0828 18:41:41.377707    6928 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 28 18:41:41 old-k8s-version-131737 kubelet[6928]: I0828 18:41:41.377906    6928 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 2 (217.898383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-131737" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (134.40s)
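The old-k8s-version failures above share one signature: kubeadm's wait-control-plane phase times out because the kubelet never answers on localhost:10248, and the kubelet journal shows it crash-looping while the API server on port 8443 stays unreachable. For reference, a minimal node-side triage sequence assembled only from commands the output above itself suggests (profile name taken from this run; any other flags the test normally passes to minikube start are omitted):

	# kubelet health and recent journal, as suggested by the kubeadm output
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# list any control-plane containers CRI-O started (none were found in this run)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# suggestion from the minikube output: retry with the systemd cgroup driver
	minikube start -p old-k8s-version-131737 --extra-config=kubelet.cgroup-driver=systemd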

                                                
                                    

Test pass (252/318)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 12.23
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.6
22 TestOffline 115.21
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 139.67
31 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/parallel/InspektorGadget 12.12
37 TestAddons/parallel/HelmTiller 11.74
39 TestAddons/parallel/CSI 57.02
40 TestAddons/parallel/Headlamp 18.59
41 TestAddons/parallel/CloudSpanner 6.56
42 TestAddons/parallel/LocalPath 11.06
43 TestAddons/parallel/NvidiaDevicePlugin 6.48
44 TestAddons/parallel/Yakd 11.82
45 TestAddons/StoppedEnableDisable 92.69
46 TestCertOptions 61.5
47 TestCertExpiration 336.41
49 TestForceSystemdFlag 102.31
50 TestForceSystemdEnv 47.45
52 TestKVMDriverInstallOrUpdate 4.28
56 TestErrorSpam/setup 37.36
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.52
60 TestErrorSpam/unpause 1.62
61 TestErrorSpam/stop 5.07
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.5
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.41
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.87
73 TestFunctional/serial/CacheCmd/cache/add_local 2.1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 33.23
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.35
84 TestFunctional/serial/LogsFileCmd 1.39
85 TestFunctional/serial/InvalidService 4.78
87 TestFunctional/parallel/ConfigCmd 0.29
88 TestFunctional/parallel/DashboardCmd 10.38
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.1
95 TestFunctional/parallel/ServiceCmdConnect 11.89
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 46.33
99 TestFunctional/parallel/SSHCmd 0.4
100 TestFunctional/parallel/CpCmd 1.25
101 TestFunctional/parallel/MySQL 30.42
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.35
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.58
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
123 TestFunctional/parallel/ProfileCmd/profile_list 0.36
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
125 TestFunctional/parallel/MountCmd/any-port 8.37
126 TestFunctional/parallel/MountCmd/specific-port 1.77
127 TestFunctional/parallel/ServiceCmd/List 0.3
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.86
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.39
131 TestFunctional/parallel/ServiceCmd/Format 0.38
132 TestFunctional/parallel/ServiceCmd/URL 0.32
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
137 TestFunctional/parallel/ImageCommands/ImageBuild 4.49
138 TestFunctional/parallel/ImageCommands/Setup 1.78
139 TestFunctional/parallel/Version/short 0.05
140 TestFunctional/parallel/Version/components 0.49
141 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
142 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
143 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.72
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.63
148 TestFunctional/parallel/ImageCommands/ImageRemove 2.79
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 190.93
158 TestMultiControlPlane/serial/DeployApp 7.28
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 53.64
161 TestMultiControlPlane/serial/NodeLabels 0.07
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.28
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.45
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.39
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.42
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 279.96
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 79.75
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.51
179 TestJSONOutput/start/Command 80.81
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.65
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.59
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 84.26
211 TestMountStart/serial/StartWithMountFirst 31.32
212 TestMountStart/serial/VerifyMountFirst 0.37
213 TestMountStart/serial/StartWithMountSecond 28.12
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.64
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.27
218 TestMountStart/serial/RestartStopped 23.13
219 TestMountStart/serial/VerifyMountPostStop 0.36
222 TestMultiNode/serial/FreshStart2Nodes 108.7
223 TestMultiNode/serial/DeployApp2Nodes 5.7
224 TestMultiNode/serial/PingHostFrom2Pods 0.76
225 TestMultiNode/serial/AddNode 51.97
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.2
228 TestMultiNode/serial/CopyFile 6.92
229 TestMultiNode/serial/StopNode 2.17
230 TestMultiNode/serial/StartAfterStop 39.33
232 TestMultiNode/serial/DeleteNode 2.16
234 TestMultiNode/serial/RestartMultiNode 174.26
235 TestMultiNode/serial/ValidateNameConflict 42.75
242 TestScheduledStopUnix 111
246 TestRunningBinaryUpgrade 180.51
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
255 TestNoKubernetes/serial/StartWithK8s 96.37
260 TestNetworkPlugins/group/false 2.84
264 TestNoKubernetes/serial/StartWithStopK8s 45.66
265 TestNoKubernetes/serial/Start 45.17
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
267 TestNoKubernetes/serial/ProfileList 0.78
268 TestNoKubernetes/serial/Stop 1.28
269 TestNoKubernetes/serial/StartNoArgs 59.77
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
271 TestStoppedBinaryUpgrade/Setup 2.28
272 TestStoppedBinaryUpgrade/Upgrade 101.27
281 TestPause/serial/Start 51
282 TestNetworkPlugins/group/auto/Start 67.16
283 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
284 TestNetworkPlugins/group/kindnet/Start 87.08
285 TestPause/serial/SecondStartNoReconfiguration 57.31
286 TestNetworkPlugins/group/auto/KubeletFlags 0.21
287 TestNetworkPlugins/group/auto/NetCatPod 12.24
288 TestNetworkPlugins/group/auto/DNS 16.12
289 TestPause/serial/Pause 0.65
290 TestPause/serial/VerifyStatus 0.23
291 TestPause/serial/Unpause 0.64
292 TestPause/serial/PauseAgain 0.86
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestPause/serial/DeletePaused 1.01
295 TestNetworkPlugins/group/auto/HairPin 0.13
296 TestPause/serial/VerifyDeletedResources 0.49
297 TestNetworkPlugins/group/calico/Start 83.81
298 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
300 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
301 TestNetworkPlugins/group/custom-flannel/Start 88.57
302 TestNetworkPlugins/group/kindnet/DNS 0.15
303 TestNetworkPlugins/group/kindnet/Localhost 0.17
304 TestNetworkPlugins/group/kindnet/HairPin 0.13
305 TestNetworkPlugins/group/enable-default-cni/Start 119.61
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/calico/KubeletFlags 0.22
308 TestNetworkPlugins/group/calico/NetCatPod 11.27
309 TestNetworkPlugins/group/calico/DNS 0.19
310 TestNetworkPlugins/group/calico/Localhost 0.15
311 TestNetworkPlugins/group/calico/HairPin 0.14
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.26
314 TestNetworkPlugins/group/custom-flannel/DNS 0.18
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
317 TestNetworkPlugins/group/flannel/Start 74.48
318 TestNetworkPlugins/group/bridge/Start 95.51
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
327 TestStartStop/group/no-preload/serial/FirstStart 88.42
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
330 TestNetworkPlugins/group/flannel/NetCatPod 13.37
331 TestNetworkPlugins/group/flannel/DNS 0.17
332 TestNetworkPlugins/group/flannel/Localhost 0.13
333 TestNetworkPlugins/group/flannel/HairPin 0.13
334 TestNetworkPlugins/group/bridge/KubeletFlags 0.62
335 TestNetworkPlugins/group/bridge/NetCatPod 11.26
337 TestStartStop/group/embed-certs/serial/FirstStart 80.93
338 TestNetworkPlugins/group/bridge/DNS 0.17
339 TestNetworkPlugins/group/bridge/Localhost 0.15
340 TestNetworkPlugins/group/bridge/HairPin 0.14
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.87
343 TestStartStop/group/no-preload/serial/DeployApp 12.29
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
346 TestStartStop/group/embed-certs/serial/DeployApp 10.27
347 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.27
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.92
350 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
353 TestStartStop/group/no-preload/serial/SecondStart 642.05
358 TestStartStop/group/embed-certs/serial/SecondStart 578.16
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 555.21
360 TestStartStop/group/old-k8s-version/serial/Stop 4.28
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/newest-cni/serial/FirstStart 43.17
373 TestStartStop/group/newest-cni/serial/DeployApp 0
374 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
375 TestStartStop/group/newest-cni/serial/Stop 10.53
376 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
377 TestStartStop/group/newest-cni/serial/SecondStart 35.22
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
381 TestStartStop/group/newest-cni/serial/Pause 2.21
x
+
TestDownloadOnly/v1.20.0/json-events (28.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-238617 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-238617 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.816253307s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-238617
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-238617: exit status 85 (52.841347ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |          |
	|         | -p download-only-238617        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:21.019512   17539 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:21.019741   17539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:21.019749   17539 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:21.019754   17539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:21.019911   17539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	W0828 16:51:21.020017   17539 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19529-10317/.minikube/config/config.json: open /home/jenkins/minikube-integration/19529-10317/.minikube/config/config.json: no such file or directory
	I0828 16:51:21.020572   17539 out.go:352] Setting JSON to true
	I0828 16:51:21.021451   17539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2027,"bootTime":1724861854,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:51:21.021508   17539 start.go:139] virtualization: kvm guest
	I0828 16:51:21.023756   17539 out.go:97] [download-only-238617] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0828 16:51:21.023855   17539 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 16:51:21.023899   17539 notify.go:220] Checking for updates...
	I0828 16:51:21.025212   17539 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:21.026449   17539 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:21.027514   17539 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:51:21.028599   17539 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:51:21.029597   17539 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0828 16:51:21.031649   17539 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 16:51:21.031839   17539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:21.137183   17539 out.go:97] Using the kvm2 driver based on user configuration
	I0828 16:51:21.137206   17539 start.go:297] selected driver: kvm2
	I0828 16:51:21.137218   17539 start.go:901] validating driver "kvm2" against <nil>
	I0828 16:51:21.137538   17539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:51:21.137643   17539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 16:51:21.152295   17539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 16:51:21.152377   17539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:21.153057   17539 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0828 16:51:21.153246   17539 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 16:51:21.153283   17539 cni.go:84] Creating CNI manager for ""
	I0828 16:51:21.153296   17539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:51:21.153304   17539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:51:21.153386   17539 start.go:340] cluster config:
	{Name:download-only-238617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-238617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:21.153604   17539 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:51:21.155470   17539 out.go:97] Downloading VM boot image ...
	I0828 16:51:21.155513   17539 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/iso/amd64/minikube-v1.33.1-1724775098-19521-amd64.iso
	I0828 16:51:36.062843   17539 out.go:97] Starting "download-only-238617" primary control-plane node in "download-only-238617" cluster
	I0828 16:51:36.062871   17539 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 16:51:36.162564   17539 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0828 16:51:36.162594   17539 cache.go:56] Caching tarball of preloaded images
	I0828 16:51:36.162725   17539 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0828 16:51:36.164342   17539 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 16:51:36.164355   17539 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0828 16:51:36.391222   17539 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-238617 host does not exist
	  To start a cluster, run: "minikube start -p download-only-238617"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
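Note: the passing run above also documents the download-only flow itself. minikube fetches the VM boot ISO and the preloaded-images tarball (verifying the published checksum), writes them under .minikube/cache, and exits before any host is created, which is why the subsequent "minikube logs" call exits with status 85. Below is a minimal Go sketch (not the suite's actual harness code) of driving that flow with os/exec; the relative binary path matches the commands shown above, while the profile name "download-only-demo" is a hypothetical placeholder.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical profile name; the run above used download-only-238617.
	profile := "download-only-demo"

	// Download the ISO and preload tarball without creating a VM.
	start := exec.Command("out/minikube-linux-amd64", "start",
		"-o=json", "--download-only", "-p", profile,
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=crio", "--driver=kvm2")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("download-only start failed: %v\n%s", err, out)
		return
	}

	// With no host created, "minikube logs" is expected to exit non-zero
	// (status 85 in the run above).
	logs := exec.Command("out/minikube-linux-amd64", "logs", "-p", profile)
	if err := logs.Run(); err != nil {
		fmt.Println("logs failed as expected:", err)
	}
}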

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-238617
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (12.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-382773 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-382773 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.232194513s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-382773
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-382773: exit status 85 (54.9714ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-238617        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| delete  | -p download-only-238617        | download-only-238617 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC | 28 Aug 24 16:51 UTC |
	| start   | -o=json --download-only        | download-only-382773 | jenkins | v1.33.1 | 28 Aug 24 16:51 UTC |                     |
	|         | -p download-only-382773        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 16:51:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 16:51:50.131740   17810 out.go:345] Setting OutFile to fd 1 ...
	I0828 16:51:50.131843   17810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:50.131854   17810 out.go:358] Setting ErrFile to fd 2...
	I0828 16:51:50.131859   17810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 16:51:50.132041   17810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 16:51:50.132606   17810 out.go:352] Setting JSON to true
	I0828 16:51:50.133433   17810 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2056,"bootTime":1724861854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 16:51:50.133487   17810 start.go:139] virtualization: kvm guest
	I0828 16:51:50.135509   17810 out.go:97] [download-only-382773] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 16:51:50.135644   17810 notify.go:220] Checking for updates...
	I0828 16:51:50.136919   17810 out.go:169] MINIKUBE_LOCATION=19529
	I0828 16:51:50.138182   17810 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 16:51:50.139449   17810 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 16:51:50.140440   17810 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 16:51:50.141398   17810 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0828 16:51:50.143441   17810 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 16:51:50.143674   17810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 16:51:50.174655   17810 out.go:97] Using the kvm2 driver based on user configuration
	I0828 16:51:50.174680   17810 start.go:297] selected driver: kvm2
	I0828 16:51:50.174693   17810 start.go:901] validating driver "kvm2" against <nil>
	I0828 16:51:50.175049   17810 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:51:50.175140   17810 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19529-10317/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0828 16:51:50.189639   17810 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0828 16:51:50.189700   17810 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 16:51:50.190219   17810 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0828 16:51:50.190361   17810 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 16:51:50.190433   17810 cni.go:84] Creating CNI manager for ""
	I0828 16:51:50.190445   17810 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0828 16:51:50.190453   17810 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 16:51:50.190506   17810 start.go:340] cluster config:
	{Name:download-only-382773 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-382773 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 16:51:50.190591   17810 iso.go:125] acquiring lock: {Name:mka4c362a5716dcd382c27bdc11ff0046b15f66a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 16:51:50.192048   17810 out.go:97] Starting "download-only-382773" primary control-plane node in "download-only-382773" cluster
	I0828 16:51:50.192063   17810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:51:50.707237   17810 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0828 16:51:50.707284   17810 cache.go:56] Caching tarball of preloaded images
	I0828 16:51:50.707469   17810 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0828 16:51:50.709223   17810 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0828 16:51:50.709236   17810 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0828 16:51:50.890596   17810 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19529-10317/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-382773 host does not exist
	  To start a cluster, run: "minikube start -p download-only-382773"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-382773
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-802579 --alsologtostderr --binary-mirror http://127.0.0.1:34799 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-802579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-802579
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestOffline (115.21s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-652855 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-652855 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m54.397105846s)
helpers_test.go:175: Cleaning up "offline-crio-652855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-652855
--- PASS: TestOffline (115.21s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-990097
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-990097: exit status 85 (49.030432ms)

                                                
                                                
-- stdout --
	* Profile "addons-990097" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-990097"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-990097
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-990097: exit status 85 (47.133825ms)

                                                
                                                
-- stdout --
	* Profile "addons-990097" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-990097"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (139.67s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-990097 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-990097 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m19.674330334s)
--- PASS: TestAddons/Setup (139.67s)
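Note: the start line above enables each addon with its own --addons flag on a single "minikube start" invocation. A minimal Go sketch of assembling that command follows (not the suite's code); the addon list is copied from the command above, while the profile name "addons-demo" is a hypothetical placeholder.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Addon list taken from the start line above; profile name is hypothetical.
	addons := []string{
		"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
		"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
		"nvidia-device-plugin", "yakd", "volcano", "ingress", "ingress-dns", "helm-tiller",
	}
	args := []string{"start", "-p", "addons-demo", "--wait=true", "--memory=4000",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=crio"}
	for _, a := range addons {
		args = append(args, "--addons="+a)
	}

	// One long invocation, mirroring the command shown in the log above.
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}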

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-990097 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-990097 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2jkm8" [f6e9c3e4-3f7e-4e02-9584-c37d5f67f477] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005186032s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-990097
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-990097: (6.118572229s)
--- PASS: TestAddons/parallel/InspektorGadget (12.12s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.74s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 5.816237ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-wr7ks" [92061fbc-b8a2-4b6f-9ffa-c0ac60e817ab] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.004690312s
addons_test.go:475: (dbg) Run:  kubectl --context addons-990097 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-990097 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.171785344s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.74s)
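Note: the helm check above is a one-shot pod run from the docker.io/alpine/helm:2.16.3 image that prints "version" and is removed afterwards. A minimal sketch of the same invocation follows; the context name "addons-demo" is hypothetical, and "-i" stands in for the test's interactive "-it".

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Context name is hypothetical; "-i" replaces the test's interactive "-it".
	cmd := exec.Command("kubectl", "--context", "addons-demo",
		"run", "--rm", "helm-test", "--restart=Never",
		"--image=docker.io/alpine/helm:2.16.3",
		"--namespace=kube-system", "-i", "--", "version")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}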

                                                
                                    
x
+
TestAddons/parallel/CSI (57.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 7.153754ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-990097 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-990097 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f95c9446-44d7-432d-8e2d-5f087abac6d6] Pending
helpers_test.go:344: "task-pv-pod" [f95c9446-44d7-432d-8e2d-5f087abac6d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f95c9446-44d7-432d-8e2d-5f087abac6d6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004386852s
addons_test.go:590: (dbg) Run:  kubectl --context addons-990097 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-990097 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-990097 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-990097 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-990097 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-990097 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-990097 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [52c57fc7-e144-4d99-ad1b-f4d36d86951b] Pending
helpers_test.go:344: "task-pv-pod-restore" [52c57fc7-e144-4d99-ad1b-f4d36d86951b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [52c57fc7-e144-4d99-ad1b-f4d36d86951b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004150023s
addons_test.go:632: (dbg) Run:  kubectl --context addons-990097 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-990097 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-990097 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.718023194s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.02s)
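Note: the repeated helpers_test.go:394 lines above are a polling loop: the PVC's phase is read with kubectl's jsonpath output until it reports Bound, after which the test proceeds to the pod, snapshot, delete, and restore steps. A minimal sketch of that loop follows (not the helper's actual code); the context name "addons-demo" is hypothetical, the PVC name "hpvc" and the 6-minute budget match the run above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		kubeContext = "addons-demo" // hypothetical; the run above used its own profile
		pvcName     = "hpvc"
	)

	// Poll the PVC phase the same way the helpers above do, until it is Bound
	// or the 6-minute budget runs out.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", pvcName, "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for the pvc to bind")
}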

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.59s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-990097 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-hnnqc" [6faf6806-732f-4d7c-973a-e619043aebcf] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-hnnqc" [6faf6806-732f-4d7c-973a-e619043aebcf] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-hnnqc" [6faf6806-732f-4d7c-973a-e619043aebcf] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004211797s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 addons disable headlamp --alsologtostderr -v=1: (5.678839767s)
--- PASS: TestAddons/parallel/Headlamp (18.59s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-znl98" [6deda0a2-a0db-4d93-b2ee-9436be933ce2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005501697s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-990097
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (11.06s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-990097 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-990097 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0676fa72-54fc-4f84-8398-9fa6fe5690d5] Pending
helpers_test.go:344: "test-local-path" [0676fa72-54fc-4f84-8398-9fa6fe5690d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0676fa72-54fc-4f84-8398-9fa6fe5690d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0676fa72-54fc-4f84-8398-9fa6fe5690d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004215065s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-990097 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 ssh "cat /opt/local-path-provisioner/pvc-a9f55e23-5044-48c9-a5ea-14e15cbb19c6_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-990097 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-990097 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.06s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j24tf" [fda32bb5-afc7-4b0f-939f-fe0614025dc2] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004510758s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-990097
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-txzm4" [9a696f56-56f3-41bb-9969-e2f45bacd1a0] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005570114s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-990097 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-990097 addons disable yakd --alsologtostderr -v=1: (5.81194569s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (92.69s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-990097
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-990097: (1m32.421639924s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-990097
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-990097
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-990097
--- PASS: TestAddons/StoppedEnableDisable (92.69s)
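Note: the sequence above stops the cluster and then confirms that addon enable/disable commands still succeed against the stopped profile. A minimal sketch of that sequence follows; the profile name "addons-demo" is hypothetical, and only the dashboard toggle from the run above is reproduced.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical profile; the run above used addons-990097.
	profile := "addons-demo"
	bin := "out/minikube-linux-amd64"

	// Stop the cluster, then drive addon toggles against the stopped profile.
	steps := [][]string{
		{"stop", "-p", profile},
		{"addons", "enable", "dashboard", "-p", profile},
		{"addons", "disable", "dashboard", "-p", profile},
	}
	for _, s := range steps {
		if out, err := exec.Command(bin, s...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
		}
	}
}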

                                                
                                    
x
+
TestCertOptions (61.5s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-700088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-700088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m0.068463567s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-700088 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-700088 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-700088 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-700088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-700088
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-700088: (1.000549844s)
--- PASS: TestCertOptions (61.50s)
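Note: the cert-options check above reads the apiserver certificate inside the VM over "minikube ssh" and decodes it with openssl, so the extra --apiserver-ips and --apiserver-names can be confirmed as subject alternative names. A minimal sketch of repeating that check follows; the profile name "cert-options-demo" is hypothetical, and the expected SAN values are taken from the start flags above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical profile; the run above used cert-options-700088.
	profile := "cert-options-demo"

	// Decode the apiserver certificate inside the VM, as the test does.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh/openssl failed: %v\n%s", err, out)
		return
	}

	// The extra SANs passed via --apiserver-ips / --apiserver-names should appear.
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if strings.Contains(string(out), want) {
			fmt.Println("found SAN:", want)
		} else {
			fmt.Println("missing SAN:", want)
		}
	}
}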

                                                
                                    
x
+
TestCertExpiration (336.41s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523070 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523070 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m55.163367326s)
E0828 18:04:06.594888   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-523070 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-523070 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.2537411s)
helpers_test.go:175: Cleaning up "cert-expiration-523070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-523070
E0828 18:07:43.308386   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestCertExpiration (336.41s)

                                                
                                    
x
+
TestForceSystemdFlag (102.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-119099 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0828 18:03:00.240422   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-119099 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m41.104144993s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-119099 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-119099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-119099
--- PASS: TestForceSystemdFlag (102.31s)
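Note: the force-systemd check above cats the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf inside the VM; with --force-systemd the expectation is that the cgroup manager is pinned to systemd. A minimal sketch of that check follows; the profile name "force-systemd-demo" is hypothetical, and cgroup_manager refers to CRI-O's [crio.runtime] setting.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical profile; the run above used force-systemd-flag-119099.
	profile := "force-systemd-demo"

	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}

	// With --force-systemd, the drop-in is expected to pin CRI-O's
	// [crio.runtime] cgroup manager to systemd.
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in the drop-in")
	}
}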

                                                
                                    
x
+
TestForceSystemdEnv (47.45s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-755013 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-755013 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.487709312s)
helpers_test.go:175: Cleaning up "force-systemd-env-755013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-755013
--- PASS: TestForceSystemdEnv (47.45s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.28s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.28s)

                                                
                                    
x
+
TestErrorSpam/setup (37.36s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-063364 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-063364 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-063364 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-063364 --driver=kvm2  --container-runtime=crio: (37.356557405s)
--- PASS: TestErrorSpam/setup (37.36s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

                                                
                                    
x
+
TestErrorSpam/stop (5.07s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop: (1.515127116s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop: (1.591809768s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-063364 --log_dir /tmp/nospam-063364 stop: (1.961038206s)
--- PASS: TestErrorSpam/stop (5.07s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19529-10317/.minikube/files/etc/test/nested/copy/17528/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (48.5s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-682131 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (48.495766808s)
--- PASS: TestFunctional/serial/StartWithProxy (48.50s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-682131 --alsologtostderr -v=8: (34.406370505s)
functional_test.go:663: soft start took 34.406960174s for "functional-682131" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-682131 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.1: (1.205722649s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.3: (1.461260736s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:latest: (1.200003341s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.87s)
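For reference, a minimal sketch of the cache workflow this test drives, using the same built binary and profile as the run above (outside CI the plain minikube binary and your own profile name would take their place):

    # add remote images to minikube's local cache and load them into the node
    out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-amd64 -p functional-682131 cache add registry.k8s.io/pause:latest
    # list what is currently cached
    out/minikube-linux-amd64 cache list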

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-682131 /tmp/TestFunctionalserialCacheCmdcacheadd_local4048450709/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache add minikube-local-cache-test:functional-682131
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 cache add minikube-local-cache-test:functional-682131: (1.777723974s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache delete minikube-local-cache-test:functional-682131
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-682131
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.10s)
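A rough sketch of the local-image variant, assuming a Docker daemon is available for the throwaway build (the tag and temporary build directory are the ones this run used):

    docker build -t minikube-local-cache-test:functional-682131 /tmp/TestFunctionalserialCacheCmdcacheadd_local4048450709/001
    out/minikube-linux-amd64 -p functional-682131 cache add minikube-local-cache-test:functional-682131
    # clean up the cache entry and the local image afterwards
    out/minikube-linux-amd64 -p functional-682131 cache delete minikube-local-cache-test:functional-682131
    docker rmi minikube-local-cache-test:functional-682131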

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.152076ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 cache reload: (1.006017323s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
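The reload sequence, roughly as exercised above: delete the image inside the node, confirm crictl no longer finds it (the inspecti call exits 1, as in the log), then let cache reload push the cached copy back in:

    out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image present
    out/minikube-linux-amd64 -p functional-682131 cache reload
    out/minikube-linux-amd64 -p functional-682131 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again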

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 kubectl -- --context functional-682131 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-682131 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.23s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-682131 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.224646938s)
functional_test.go:761: restart took 33.224786335s for "functional-682131" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.23s)
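A minimal sketch of restarting with an extra component flag, exactly as this run does; the component.key=value form of --extra-config is the only detail beyond the logged command:

    out/minikube-linux-amd64 start -p functional-682131 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all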

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-682131 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 logs: (1.353215779s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 logs --file /tmp/TestFunctionalserialLogsFileCmd323134686/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 logs --file /tmp/TestFunctionalserialLogsFileCmd323134686/001/logs.txt: (1.39236096s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.39s)

                                                
                                    
TestFunctional/serial/InvalidService (4.78s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-682131 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-682131
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-682131: exit status 115 (268.313544ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.23:31956 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-682131 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-682131 delete -f testdata/invalidsvc.yaml: (1.319341714s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)
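Sketch of the failure path checked here: a Service whose selector matches no running pod makes minikube service bail out with exit status 115 and the SVC_UNREACHABLE advice shown above (the manifest is the repo's testdata file):

    kubectl --context functional-682131 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-682131   # exit 115: no running pod behind the service
    kubectl --context functional-682131 delete -f testdata/invalidsvc.yaml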

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 config get cpus: exit status 14 (49.163283ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 config get cpus: exit status 14 (43.646548ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.29s)
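The config round-trip being verified, as a sketch; config get on an unset key exits 14 with the "specified key could not be found in config" error seen above:

    out/minikube-linux-amd64 -p functional-682131 config unset cpus
    out/minikube-linux-amd64 -p functional-682131 config get cpus      # exit 14: key not set
    out/minikube-linux-amd64 -p functional-682131 config set cpus 2
    out/minikube-linux-amd64 -p functional-682131 config get cpus      # prints the stored value
    out/minikube-linux-amd64 -p functional-682131 config unset cpus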

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-682131 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-682131 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28261: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.38s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-682131 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (145.237916ms)

                                                
                                                
-- stdout --
	* [functional-682131] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:13:14.348191   27993 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:13:14.348494   27993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:14.348505   27993 out.go:358] Setting ErrFile to fd 2...
	I0828 17:13:14.348511   27993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:14.348766   27993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:13:14.349392   27993 out.go:352] Setting JSON to false
	I0828 17:13:14.350589   27993 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3340,"bootTime":1724861854,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:13:14.350668   27993 start.go:139] virtualization: kvm guest
	I0828 17:13:14.352843   27993 out.go:177] * [functional-682131] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 17:13:14.354460   27993 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:13:14.354462   27993 notify.go:220] Checking for updates...
	I0828 17:13:14.357454   27993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:13:14.359116   27993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:13:14.360802   27993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:14.362223   27993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:13:14.363857   27993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:13:14.365906   27993 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:13:14.366411   27993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:14.366469   27993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:14.381326   27993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45047
	I0828 17:13:14.381689   27993 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:14.382287   27993 main.go:141] libmachine: Using API Version  1
	I0828 17:13:14.382319   27993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:14.382707   27993 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:14.382899   27993 main.go:141] libmachine: (functional-682131) Calling .DriverName
	I0828 17:13:14.383144   27993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:13:14.383531   27993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:14.383570   27993 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:14.398611   27993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0828 17:13:14.399092   27993 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:14.399726   27993 main.go:141] libmachine: Using API Version  1
	I0828 17:13:14.399760   27993 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:14.400059   27993 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:14.400268   27993 main.go:141] libmachine: (functional-682131) Calling .DriverName
	I0828 17:13:14.434565   27993 out.go:177] * Using the kvm2 driver based on existing profile
	I0828 17:13:14.435733   27993 start.go:297] selected driver: kvm2
	I0828 17:13:14.435755   27993 start.go:901] validating driver "kvm2" against &{Name:functional-682131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-682131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:13:14.435851   27993 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:13:14.437693   27993 out.go:201] 
	W0828 17:13:14.438890   27993 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 17:13:14.440191   27993 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
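Sketch of the two dry runs above: with --memory below minikube's 1800MB minimum the validation fails up front with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while a dry run without the undersized value passes:

    out/minikube-linux-amd64 start -p functional-682131 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exit 23
    out/minikube-linux-amd64 start -p functional-682131 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio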

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-682131 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-682131 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.545694ms)

                                                
                                                
-- stdout --
	* [functional-682131] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:13:01.333109   26767 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:13:01.333224   26767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:01.333236   26767 out.go:358] Setting ErrFile to fd 2...
	I0828 17:13:01.333240   26767 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:13:01.333517   26767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:13:01.334475   26767 out.go:352] Setting JSON to false
	I0828 17:13:01.335434   26767 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3327,"bootTime":1724861854,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 17:13:01.335498   26767 start.go:139] virtualization: kvm guest
	I0828 17:13:01.337699   26767 out.go:177] * [functional-682131] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0828 17:13:01.339294   26767 notify.go:220] Checking for updates...
	I0828 17:13:01.339306   26767 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 17:13:01.340955   26767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 17:13:01.342405   26767 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 17:13:01.343557   26767 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 17:13:01.344643   26767 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 17:13:01.345787   26767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 17:13:01.347696   26767 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:13:01.348298   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:01.348363   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:01.364232   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
	I0828 17:13:01.364642   26767 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:01.365211   26767 main.go:141] libmachine: Using API Version  1
	I0828 17:13:01.365229   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:01.365596   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:01.365792   26767 main.go:141] libmachine: (functional-682131) Calling .DriverName
	I0828 17:13:01.366120   26767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 17:13:01.366561   26767 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:13:01.366608   26767 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:13:01.381695   26767 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36495
	I0828 17:13:01.382162   26767 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:13:01.382618   26767 main.go:141] libmachine: Using API Version  1
	I0828 17:13:01.382641   26767 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:13:01.382949   26767 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:13:01.383158   26767 main.go:141] libmachine: (functional-682131) Calling .DriverName
	I0828 17:13:01.416759   26767 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0828 17:13:01.417871   26767 start.go:297] selected driver: kvm2
	I0828 17:13:01.417892   26767 start.go:901] validating driver "kvm2" against &{Name:functional-682131 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-682131 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 17:13:01.418010   26767 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 17:13:01.420190   26767 out.go:201] 
	W0828 17:13:01.421333   26767 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 17:13:01.422457   26767 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-682131 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-682131 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dt9cs" [aa8337a8-6a48-4905-9ddc-351feeaf4556] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-dt9cs" [aa8337a8-6a48-4905-9ddc-351feeaf4556] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003923331s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.23:31157
functional_test.go:1675: http://192.168.39.23:31157: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-dt9cs

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.23:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.23:31157
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.89s)
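A rough sketch of the NodePort round trip above; the echoserver deployment and the service --url lookup are straight from the log, and fetching the URL with curl stands in for the test's own HTTP client:

    kubectl --context functional-682131 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-682131 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-682131 service hello-node-connect --url)   # e.g. http://192.168.39.23:31157
    curl "$URL"   # echoserver replies with the pod hostname and request details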

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (46.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4ece3b18-f9ae-4e0c-9305-4507d7fad872] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005219421s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-682131 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-682131 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-682131 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-682131 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [907d2980-5740-4198-89ee-82e9a8b48857] Pending
helpers_test.go:344: "sp-pod" [907d2980-5740-4198-89ee-82e9a8b48857] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [907d2980-5740-4198-89ee-82e9a8b48857] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004019881s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-682131 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-682131 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-682131 delete -f testdata/storage-provisioner/pod.yaml: (1.51471695s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-682131 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dade9d91-aed3-48a3-974b-de1e66462f63] Pending
2024/08/28 17:13:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [dade9d91-aed3-48a3-974b-de1e66462f63] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dade9d91-aed3-48a3-974b-de1e66462f63] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.004652455s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-682131 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.33s)
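Sketch of the persistence check: claim a volume, write through one pod, delete and recreate the pod, and confirm the file is still on the mount (manifest paths are the repo's testdata files used above):

    kubectl --context functional-682131 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-682131 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-682131 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-682131 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-682131 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-682131 exec sp-pod -- ls /tmp/mount   # foo should survive the pod recreation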

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh -n functional-682131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cp functional-682131:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd323536142/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh -n functional-682131 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh -n functional-682131 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.25s)
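The three copy directions covered above, as a sketch (the destination temp directory is the one this run created; any writable host path works):

    # host -> node
    out/minikube-linux-amd64 -p functional-682131 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host
    out/minikube-linux-amd64 -p functional-682131 cp functional-682131:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd323536142/001/cp-test.txt
    # host -> node, creating the missing target directory
    out/minikube-linux-amd64 -p functional-682131 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt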

                                                
                                    
TestFunctional/parallel/MySQL (30.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-682131 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-p9dg5" [39a9c936-b19d-4ac7-9f18-148933b48a5f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-p9dg5" [39a9c936-b19d-4ac7-9f18-148933b48a5f] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.003660302s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-682131 exec mysql-6cdb49bbb-p9dg5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-682131 exec mysql-6cdb49bbb-p9dg5 -- mysql -ppassword -e "show databases;": exit status 1 (120.555309ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-682131 exec mysql-6cdb49bbb-p9dg5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.42s)
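Sketch of the MySQL check, including why the first exec can fail: mysqld may still be initializing when the pod reports Running, so the initial query hits ERROR 2002 and is simply retried (the pod name is this run's replica):

    kubectl --context functional-682131 replace --force -f testdata/mysql.yaml
    kubectl --context functional-682131 exec mysql-6cdb49bbb-p9dg5 -- mysql -ppassword -e "show databases;"   # may fail with ERROR 2002 at first; retry until mysqld accepts connections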

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/17528/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /etc/test/nested/copy/17528/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/17528.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /etc/ssl/certs/17528.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/17528.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /usr/share/ca-certificates/17528.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/175282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /etc/ssl/certs/175282.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/175282.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /usr/share/ca-certificates/175282.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-682131 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active docker": exit status 1 (256.865573ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active containerd": exit status 1 (213.717068ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
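Sketch of the runtime exclusivity check: with crio selected, docker and containerd should be inactive, and systemctl is-active exits non-zero for an inactive unit, which ssh propagates as the "Process exited with status 3" seen above:

    out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active docker"       # prints "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-682131 ssh "sudo systemctl is-active containerd"   # prints "inactive", non-zero exit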

                                                
                                    
TestFunctional/parallel/License (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-682131 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-682131 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-cdrd2" [e262e0ff-8ede-4556-a4cf-db26c43323fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-cdrd2" [e262e0ff-8ede-4556-a4cf-db26c43323fc] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003791419s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "304.721619ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.453145ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "262.397197ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.338076ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdany-port2563057807/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724865182442245741" to /tmp/TestFunctionalparallelMountCmdany-port2563057807/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724865182442245741" to /tmp/TestFunctionalparallelMountCmdany-port2563057807/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724865182442245741" to /tmp/TestFunctionalparallelMountCmdany-port2563057807/001/test-1724865182442245741
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.248026ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 28 17:13 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 28 17:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 28 17:13 test-1724865182442245741
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh cat /mount-9p/test-1724865182442245741
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-682131 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [441e2ec9-03d9-42be-8c38-e97ceee017dd] Pending
helpers_test.go:344: "busybox-mount" [441e2ec9-03d9-42be-8c38-e97ceee017dd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [441e2ec9-03d9-42be-8c38-e97ceee017dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [441e2ec9-03d9-42be-8c38-e97ceee017dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003586834s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-682131 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdany-port2563057807/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.37s)
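
Note: the 9p mount flow above can be replayed outside the test harness. A minimal sketch, assuming the functional-682131 profile is running; /tmp/demo-mount is a hypothetical host directory standing in for the per-test temp dir:

  # start the mount in the background, then verify it from inside the guest
  mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-by-hand
  out/minikube-linux-amd64 mount -p functional-682131 /tmp/demo-mount:/mount-9p &
  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-682131 ssh "cat /mount-9p/created-by-hand"
  # tear down: force-unmount in the guest, then stop the background mount process
  out/minikube-linux-amd64 -p functional-682131 ssh "sudo umount -f /mount-9p"
  kill %1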

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdspecific-port782941355/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.225342ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdspecific-port782941355/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "sudo umount -f /mount-9p": exit status 1 (218.298023ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-682131 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdspecific-port782941355/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)
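
Note: the specific-port variant adds --port to the same flow; the final `sudo umount -f` failing with "not mounted" (status 32) does not fail the test because the mount daemon had already been stopped. A minimal sketch, assuming port 46464 is free on the host and reusing the hypothetical /tmp/demo-mount directory:

  out/minikube-linux-amd64 mount -p functional-682131 /tmp/demo-mount:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T /mount-9p | grep 9p"
  kill %1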

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service list -o json
functional_test.go:1494: Took "325.695826ms" to run "out/minikube-linux-amd64 -p functional-682131 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)
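
Note: `service list -o json` returns the same listing as the plain form in machine-readable shape; a minimal sketch (jq only for pretty-printing):

  out/minikube-linux-amd64 -p functional-682131 service list
  out/minikube-linux-amd64 -p functional-682131 service list -o json | jq .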

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.23:32401
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.86s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T" /mount1: exit status 1 (302.258581ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-682131 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-682131 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3195064095/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.39s)
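
Note: VerifyCleanup starts three concurrent mounts of one host directory and then relies on the kill switch rather than stopping each process; after `mount --kill=true` the test's own stop step finds no surviving mount process. A minimal sketch, reusing the hypothetical /tmp/demo-mount directory:

  out/minikube-linux-amd64 mount -p functional-682131 /tmp/demo-mount:/mount1 &
  out/minikube-linux-amd64 mount -p functional-682131 /tmp/demo-mount:/mount2 &
  out/minikube-linux-amd64 mount -p functional-682131 /tmp/demo-mount:/mount3 &
  # terminate all of this profile's mount processes in one call
  out/minikube-linux-amd64 mount -p functional-682131 --kill=true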

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.23:32401
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.32s)
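
Note: the HTTPS, Format and URL subtests all resolve the same NodePort endpoint (192.168.39.23:32401 in this run) through different output modes; a minimal sketch:

  # full URLs (https and http)
  out/minikube-linux-amd64 -p functional-682131 service --namespace=default --https --url hello-node
  out/minikube-linux-amd64 -p functional-682131 service hello-node --url
  # only the node IP, selected via a Go template
  out/minikube-linux-amd64 -p functional-682131 service hello-node --url --format={{.IP}}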

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-682131 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-682131
localhost/kicbase/echo-server:functional-682131
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-682131 image ls --format short --alsologtostderr:
I0828 17:13:24.954841   28798 out.go:345] Setting OutFile to fd 1 ...
I0828 17:13:24.955079   28798 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:24.955088   28798 out.go:358] Setting ErrFile to fd 2...
I0828 17:13:24.955092   28798 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:24.955294   28798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
I0828 17:13:24.955863   28798 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:24.955962   28798 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:24.956374   28798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:24.956435   28798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:24.971995   28798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33559
I0828 17:13:24.972433   28798 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:24.972986   28798 main.go:141] libmachine: Using API Version  1
I0828 17:13:24.973005   28798 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:24.973326   28798 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:24.973534   28798 main.go:141] libmachine: (functional-682131) Calling .GetState
I0828 17:13:24.975238   28798 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:24.975281   28798 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:24.991839   28798 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43605
I0828 17:13:24.992284   28798 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:24.992836   28798 main.go:141] libmachine: Using API Version  1
I0828 17:13:24.992867   28798 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:24.993172   28798 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:24.993336   28798 main.go:141] libmachine: (functional-682131) Calling .DriverName
I0828 17:13:24.993515   28798 ssh_runner.go:195] Run: systemctl --version
I0828 17:13:24.993538   28798 main.go:141] libmachine: (functional-682131) Calling .GetSSHHostname
I0828 17:13:24.996203   28798 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:24.996541   28798 main.go:141] libmachine: (functional-682131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:dd", ip: ""} in network mk-functional-682131: {Iface:virbr1 ExpiryTime:2024-08-28 18:11:01 +0000 UTC Type:0 Mac:52:54:00:ae:b0:dd Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:functional-682131 Clientid:01:52:54:00:ae:b0:dd}
I0828 17:13:24.996568   28798 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined IP address 192.168.39.23 and MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:24.996727   28798 main.go:141] libmachine: (functional-682131) Calling .GetSSHPort
I0828 17:13:24.996884   28798 main.go:141] libmachine: (functional-682131) Calling .GetSSHKeyPath
I0828 17:13:24.997036   28798 main.go:141] libmachine: (functional-682131) Calling .GetSSHUsername
I0828 17:13:24.997176   28798 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/functional-682131/id_rsa Username:docker}
I0828 17:13:25.080152   28798 ssh_runner.go:195] Run: sudo crictl images --output json
I0828 17:13:25.118354   28798 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.118377   28798 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.118650   28798 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.118668   28798 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:25.118678   28798 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.118686   28798 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.118709   28798 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:25.118913   28798 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:25.118921   28798 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.118935   28798 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
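
Note: this and the next three ImageList subtests differ only in the --format flag; with the crio runtime the listing is backed by `sudo crictl images --output json` on the node, as the stderr above shows. A minimal sketch of the four variants:

  out/minikube-linux-amd64 -p functional-682131 image ls --format short
  out/minikube-linux-amd64 -p functional-682131 image ls --format table
  out/minikube-linux-amd64 -p functional-682131 image ls --format json
  out/minikube-linux-amd64 -p functional-682131 image ls --format yaml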

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-682131 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | latest             | 5ef79149e0ec8 | 192MB  |
| localhost/minikube-local-cache-test     | functional-682131  | c2ad379b94605 | 3.33kB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/kicbase/echo-server           | functional-682131  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-682131 image ls --format table --alsologtostderr:
I0828 17:13:25.469700   28924 out.go:345] Setting OutFile to fd 1 ...
I0828 17:13:25.469814   28924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.469824   28924 out.go:358] Setting ErrFile to fd 2...
I0828 17:13:25.469830   28924 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.470019   28924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
I0828 17:13:25.470613   28924 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.470704   28924 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.471072   28924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.471125   28924 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.486232   28924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
I0828 17:13:25.486769   28924 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.487406   28924 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.487435   28924 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.487764   28924 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.488020   28924 main.go:141] libmachine: (functional-682131) Calling .GetState
I0828 17:13:25.489921   28924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.489963   28924 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.505051   28924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42575
I0828 17:13:25.505442   28924 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.505871   28924 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.505893   28924 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.506247   28924 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.506469   28924 main.go:141] libmachine: (functional-682131) Calling .DriverName
I0828 17:13:25.506700   28924 ssh_runner.go:195] Run: systemctl --version
I0828 17:13:25.506727   28924 main.go:141] libmachine: (functional-682131) Calling .GetSSHHostname
I0828 17:13:25.509832   28924 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.510306   28924 main.go:141] libmachine: (functional-682131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:dd", ip: ""} in network mk-functional-682131: {Iface:virbr1 ExpiryTime:2024-08-28 18:11:01 +0000 UTC Type:0 Mac:52:54:00:ae:b0:dd Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:functional-682131 Clientid:01:52:54:00:ae:b0:dd}
I0828 17:13:25.510336   28924 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined IP address 192.168.39.23 and MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.510509   28924 main.go:141] libmachine: (functional-682131) Calling .GetSSHPort
I0828 17:13:25.510682   28924 main.go:141] libmachine: (functional-682131) Calling .GetSSHKeyPath
I0828 17:13:25.510850   28924 main.go:141] libmachine: (functional-682131) Calling .GetSSHUsername
I0828 17:13:25.511007   28924 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/functional-682131/id_rsa Username:docker}
I0828 17:13:25.623148   28924 ssh_runner.go:195] Run: sudo crictl images --output json
I0828 17:13:25.694870   28924 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.694889   28924 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.695278   28924 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.695323   28924 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:25.695342   28924 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.695352   28924 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.695619   28924 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.695636   28924 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:25.695660   28924 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-682131 image ls --format json --alsologtostderr:
[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-682131"],"size":"4943877"},{"id":"cbb01a7bd410dc08ba382018ab9
09a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigest
s":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add","docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f"],"repoTags":["docker.io/library/nginx:latest"],"size":"191841612"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39be
dd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":["registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6
ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c2ad379b94605dce36698944749eb438260bf79b344720dec9c42ae72c8d47f9","repoDigests":["localhost/minikube-local-cache-test@sha256:ddf740d3c74f3f6f1abf8f71e837179c37fbdc6c124726394820e2272dce5b55"],"repoTags":["localhost/minik
ube-local-cache-test:functional-682131"],"size":"3330"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-682131 image ls --format json --alsologtostderr:
I0828 17:13:25.217112   28860 out.go:345] Setting OutFile to fd 1 ...
I0828 17:13:25.217444   28860 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.217457   28860 out.go:358] Setting ErrFile to fd 2...
I0828 17:13:25.217466   28860 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.217889   28860 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
I0828 17:13:25.219253   28860 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.219371   28860 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.219831   28860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.219876   28860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.235845   28860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
I0828 17:13:25.236307   28860 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.236998   28860 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.237029   28860 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.237461   28860 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.237642   28860 main.go:141] libmachine: (functional-682131) Calling .GetState
I0828 17:13:25.239440   28860 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.239474   28860 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.254517   28860 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42055
I0828 17:13:25.254882   28860 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.255297   28860 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.255324   28860 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.255706   28860 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.255887   28860 main.go:141] libmachine: (functional-682131) Calling .DriverName
I0828 17:13:25.256039   28860 ssh_runner.go:195] Run: systemctl --version
I0828 17:13:25.256063   28860 main.go:141] libmachine: (functional-682131) Calling .GetSSHHostname
I0828 17:13:25.258782   28860 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.259124   28860 main.go:141] libmachine: (functional-682131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:dd", ip: ""} in network mk-functional-682131: {Iface:virbr1 ExpiryTime:2024-08-28 18:11:01 +0000 UTC Type:0 Mac:52:54:00:ae:b0:dd Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:functional-682131 Clientid:01:52:54:00:ae:b0:dd}
I0828 17:13:25.259139   28860 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined IP address 192.168.39.23 and MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.259295   28860 main.go:141] libmachine: (functional-682131) Calling .GetSSHPort
I0828 17:13:25.259409   28860 main.go:141] libmachine: (functional-682131) Calling .GetSSHKeyPath
I0828 17:13:25.259558   28860 main.go:141] libmachine: (functional-682131) Calling .GetSSHUsername
I0828 17:13:25.259643   28860 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/functional-682131/id_rsa Username:docker}
I0828 17:13:25.349821   28860 ssh_runner.go:195] Run: sudo crictl images --output json
I0828 17:13:25.418196   28860 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.418217   28860 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.418449   28860 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.418465   28860 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:25.418477   28860 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.418488   28860 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.418488   28860 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:25.418697   28860 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.418712   28860 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-682131 image ls --format yaml --alsologtostderr:
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-682131
size: "4943877"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
- docker.io/library/nginx@sha256:5f0574409b3add89581b96c68afe9e9c7b284651c3a974b6e8bac46bf95e6b7f
repoTags:
- docker.io/library/nginx:latest
size: "191841612"
- id: c2ad379b94605dce36698944749eb438260bf79b344720dec9c42ae72c8d47f9
repoDigests:
- localhost/minikube-local-cache-test@sha256:ddf740d3c74f3f6f1abf8f71e837179c37fbdc6c124726394820e2272dce5b55
repoTags:
- localhost/minikube-local-cache-test:functional-682131
size: "3330"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-682131 image ls --format yaml --alsologtostderr:
I0828 17:13:24.993066   28810 out.go:345] Setting OutFile to fd 1 ...
I0828 17:13:24.993359   28810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:24.993370   28810 out.go:358] Setting ErrFile to fd 2...
I0828 17:13:24.993377   28810 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:24.993635   28810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
I0828 17:13:24.994390   28810 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:24.994527   28810 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:24.995074   28810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:24.995128   28810 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.010480   28810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33481
I0828 17:13:25.010892   28810 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.011517   28810 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.011539   28810 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.011896   28810 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.012100   28810 main.go:141] libmachine: (functional-682131) Calling .GetState
I0828 17:13:25.013987   28810 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.014025   28810 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.028642   28810 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42187
I0828 17:13:25.029098   28810 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.029747   28810 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.029781   28810 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.030153   28810 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.030378   28810 main.go:141] libmachine: (functional-682131) Calling .DriverName
I0828 17:13:25.030616   28810 ssh_runner.go:195] Run: systemctl --version
I0828 17:13:25.030660   28810 main.go:141] libmachine: (functional-682131) Calling .GetSSHHostname
I0828 17:13:25.033473   28810 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.033925   28810 main.go:141] libmachine: (functional-682131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:dd", ip: ""} in network mk-functional-682131: {Iface:virbr1 ExpiryTime:2024-08-28 18:11:01 +0000 UTC Type:0 Mac:52:54:00:ae:b0:dd Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:functional-682131 Clientid:01:52:54:00:ae:b0:dd}
I0828 17:13:25.033952   28810 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined IP address 192.168.39.23 and MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.034132   28810 main.go:141] libmachine: (functional-682131) Calling .GetSSHPort
I0828 17:13:25.034281   28810 main.go:141] libmachine: (functional-682131) Calling .GetSSHKeyPath
I0828 17:13:25.034442   28810 main.go:141] libmachine: (functional-682131) Calling .GetSSHUsername
I0828 17:13:25.034583   28810 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/functional-682131/id_rsa Username:docker}
I0828 17:13:25.122011   28810 ssh_runner.go:195] Run: sudo crictl images --output json
I0828 17:13:25.168272   28810 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.168288   28810 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.168616   28810 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.168603   28810 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:25.168632   28810 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:25.168643   28810 main.go:141] libmachine: Making call to close driver server
I0828 17:13:25.168651   28810 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:25.168867   28810 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:25.168919   28810 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:25.168931   28810 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-682131 ssh pgrep buildkitd: exit status 1 (214.533388ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image build -t localhost/my-image:functional-682131 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 image build -t localhost/my-image:functional-682131 testdata/build --alsologtostderr: (3.487024937s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-682131 image build -t localhost/my-image:functional-682131 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6e9f2057d90
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-682131
--> 3814ff3c90d
Successfully tagged localhost/my-image:functional-682131
3814ff3c90d1dcfb3effdc977a3bf28285922abe3d702ec4403952f3bd83fdf9
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-682131 image build -t localhost/my-image:functional-682131 testdata/build --alsologtostderr:
I0828 17:13:25.383635   28900 out.go:345] Setting OutFile to fd 1 ...
I0828 17:13:25.383951   28900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.383964   28900 out.go:358] Setting ErrFile to fd 2...
I0828 17:13:25.383971   28900 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 17:13:25.384232   28900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
I0828 17:13:25.385170   28900 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.385768   28900 config.go:182] Loaded profile config "functional-682131": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0828 17:13:25.386182   28900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.386223   28900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.401791   28900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
I0828 17:13:25.402263   28900 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.402889   28900 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.402912   28900 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.403269   28900 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.403467   28900 main.go:141] libmachine: (functional-682131) Calling .GetState
I0828 17:13:25.405687   28900 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0828 17:13:25.405735   28900 main.go:141] libmachine: Launching plugin server for driver kvm2
I0828 17:13:25.420946   28900 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45495
I0828 17:13:25.421283   28900 main.go:141] libmachine: () Calling .GetVersion
I0828 17:13:25.421757   28900 main.go:141] libmachine: Using API Version  1
I0828 17:13:25.421782   28900 main.go:141] libmachine: () Calling .SetConfigRaw
I0828 17:13:25.422212   28900 main.go:141] libmachine: () Calling .GetMachineName
I0828 17:13:25.422417   28900 main.go:141] libmachine: (functional-682131) Calling .DriverName
I0828 17:13:25.422714   28900 ssh_runner.go:195] Run: systemctl --version
I0828 17:13:25.422752   28900 main.go:141] libmachine: (functional-682131) Calling .GetSSHHostname
I0828 17:13:25.425798   28900 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.426245   28900 main.go:141] libmachine: (functional-682131) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:b0:dd", ip: ""} in network mk-functional-682131: {Iface:virbr1 ExpiryTime:2024-08-28 18:11:01 +0000 UTC Type:0 Mac:52:54:00:ae:b0:dd Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:functional-682131 Clientid:01:52:54:00:ae:b0:dd}
I0828 17:13:25.426285   28900 main.go:141] libmachine: (functional-682131) DBG | domain functional-682131 has defined IP address 192.168.39.23 and MAC address 52:54:00:ae:b0:dd in network mk-functional-682131
I0828 17:13:25.426381   28900 main.go:141] libmachine: (functional-682131) Calling .GetSSHPort
I0828 17:13:25.426540   28900 main.go:141] libmachine: (functional-682131) Calling .GetSSHKeyPath
I0828 17:13:25.426682   28900 main.go:141] libmachine: (functional-682131) Calling .GetSSHUsername
I0828 17:13:25.426831   28900 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/functional-682131/id_rsa Username:docker}
I0828 17:13:25.548253   28900 build_images.go:161] Building image from path: /tmp/build.1096951304.tar
I0828 17:13:25.548353   28900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0828 17:13:25.566185   28900 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1096951304.tar
I0828 17:13:25.574415   28900 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1096951304.tar: stat -c "%s %y" /var/lib/minikube/build/build.1096951304.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1096951304.tar': No such file or directory
I0828 17:13:25.574443   28900 ssh_runner.go:362] scp /tmp/build.1096951304.tar --> /var/lib/minikube/build/build.1096951304.tar (3072 bytes)
I0828 17:13:25.622054   28900 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1096951304
I0828 17:13:25.646838   28900 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1096951304 -xf /var/lib/minikube/build/build.1096951304.tar
I0828 17:13:25.676341   28900 crio.go:315] Building image: /var/lib/minikube/build/build.1096951304
I0828 17:13:25.676425   28900 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-682131 /var/lib/minikube/build/build.1096951304 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0828 17:13:28.791675   28900 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-682131 /var/lib/minikube/build/build.1096951304 --cgroup-manager=cgroupfs: (3.115222644s)
I0828 17:13:28.791756   28900 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1096951304
I0828 17:13:28.808904   28900 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1096951304.tar
I0828 17:13:28.821572   28900 build_images.go:217] Built localhost/my-image:functional-682131 from /tmp/build.1096951304.tar
I0828 17:13:28.821614   28900 build_images.go:133] succeeded building to: functional-682131
I0828 17:13:28.821621   28900 build_images.go:134] failed building to: 
I0828 17:13:28.821652   28900 main.go:141] libmachine: Making call to close driver server
I0828 17:13:28.821668   28900 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:28.821942   28900 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:28.821962   28900 main.go:141] libmachine: Making call to close connection to plugin binary
I0828 17:13:28.821971   28900 main.go:141] libmachine: Making call to close driver server
I0828 17:13:28.821977   28900 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:28.821981   28900 main.go:141] libmachine: (functional-682131) Calling .Close
I0828 17:13:28.822234   28900 main.go:141] libmachine: Successfully made call to close driver server
I0828 17:13:28.822253   28900 main.go:141] libmachine: (functional-682131) DBG | Closing plugin on server side
I0828 17:13:28.822256   28900 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)
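The build log above shows how an image build reaches the crio runtime: minikube tars the local build context, copies it to /var/lib/minikube/build on the node, and runs sudo podman build with the cgroupfs cgroup manager there. A minimal sketch of driving the same path from the CLI, assuming a local directory ./my-image containing a Dockerfile (the directory name is illustrative, not the test's actual build context):

    out/minikube-linux-amd64 -p functional-682131 image build -t localhost/my-image:functional-682131 ./my-image
    out/minikube-linux-amd64 -p functional-682131 image ls    # the new tag should be listed from crio's image store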

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.78s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.763722856s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-682131
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.49s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image load --daemon kicbase/echo-server:functional-682131 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 image load --daemon kicbase/echo-server:functional-682131 --alsologtostderr: (1.477759429s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.72s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image load --daemon kicbase/echo-server:functional-682131 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-682131
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image load --daemon kicbase/echo-server:functional-682131 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image save kicbase/echo-server:functional-682131 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image rm kicbase/echo-server:functional-682131 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-682131 image rm kicbase/echo-server:functional-682131 --alsologtostderr: (2.539687842s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.79s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-682131
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-682131 image save --daemon kicbase/echo-server:functional-682131 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-682131
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

TestFunctional/delete_echo-server_images (0.03s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-682131
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-682131
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-682131
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (190.93s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-240486 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0828 17:14:23.525406   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.532384   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.543706   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.565174   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.606588   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.688020   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:23.849564   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:24.171266   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:24.813468   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:26.094732   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:28.656695   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:33.778940   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:14:44.020287   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:04.502240   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:15:45.464052   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-240486 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m10.290188572s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (190.93s)

TestMultiControlPlane/serial/DeployApp (7.28s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-240486 -- rollout status deployment/busybox: (5.22754376s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-5pjcm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-dtp5b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-tnmmz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-5pjcm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-dtp5b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-tnmmz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-5pjcm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-dtp5b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-tnmmz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.28s)

TestMultiControlPlane/serial/PingHostFromPods (1.15s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-5pjcm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-5pjcm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-dtp5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-dtp5b -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-tnmmz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E0828 17:17:07.385761   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-240486 -- exec busybox-7dff88458-tnmmz -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

TestMultiControlPlane/serial/AddWorkerNode (53.64s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-240486 -v=7 --alsologtostderr
E0828 17:18:00.240343   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.246818   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.258163   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.279518   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.320943   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.402411   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-240486 -v=7 --alsologtostderr: (52.853819797s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
E0828 17:18:00.563969   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:18:00.886005   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.64s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-240486 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0828 17:18:01.527337   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

TestMultiControlPlane/serial/CopyFile (12.28s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp testdata/cp-test.txt ha-240486:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test.txt"
E0828 17:18:02.809018   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486:/home/docker/cp-test.txt ha-240486-m02:/home/docker/cp-test_ha-240486_ha-240486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test_ha-240486_ha-240486-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486:/home/docker/cp-test.txt ha-240486-m03:/home/docker/cp-test_ha-240486_ha-240486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test_ha-240486_ha-240486-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486:/home/docker/cp-test.txt ha-240486-m04:/home/docker/cp-test_ha-240486_ha-240486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test_ha-240486_ha-240486-m04.txt"
E0828 17:18:05.370592   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp testdata/cp-test.txt ha-240486-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m02:/home/docker/cp-test.txt ha-240486:/home/docker/cp-test_ha-240486-m02_ha-240486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test_ha-240486-m02_ha-240486.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m02:/home/docker/cp-test.txt ha-240486-m03:/home/docker/cp-test_ha-240486-m02_ha-240486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test_ha-240486-m02_ha-240486-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m02:/home/docker/cp-test.txt ha-240486-m04:/home/docker/cp-test_ha-240486-m02_ha-240486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test_ha-240486-m02_ha-240486-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp testdata/cp-test.txt ha-240486-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt ha-240486:/home/docker/cp-test_ha-240486-m03_ha-240486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test_ha-240486-m03_ha-240486.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt ha-240486-m02:/home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test_ha-240486-m03_ha-240486-m02.txt"
E0828 17:18:10.492814   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m03:/home/docker/cp-test.txt ha-240486-m04:/home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test_ha-240486-m03_ha-240486-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp testdata/cp-test.txt ha-240486-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3516631358/001/cp-test_ha-240486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt ha-240486:/home/docker/cp-test_ha-240486-m04_ha-240486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486 "sudo cat /home/docker/cp-test_ha-240486-m04_ha-240486.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt ha-240486-m02:/home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m02 "sudo cat /home/docker/cp-test_ha-240486-m04_ha-240486-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 cp ha-240486-m04:/home/docker/cp-test.txt ha-240486-m03:/home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 ssh -n ha-240486-m03 "sudo cat /home/docker/cp-test_ha-240486-m04_ha-240486-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.28s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.454477541s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.39s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.42s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-240486 node delete m03 -v=7 --alsologtostderr: (15.703462205s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.42s)
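As an aside, the node-readiness check above pipes a go-template through kubectl; the same per-node Ready status can also be pulled with a jsonpath expression. Shown only as an alternative formulation, reusing the context name from this run:

    kubectl --context ha-240486 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'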

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

TestMultiControlPlane/serial/RestartCluster (279.96s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-240486 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0828 17:33:00.240507   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:34:23.303790   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:34:23.523832   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-240486 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m39.230019266s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (279.96s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (79.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-240486 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-240486 --control-plane -v=7 --alsologtostderr: (1m18.952513385s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-240486 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.51s)

TestJSONOutput/start/Command (80.81s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-912766 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0828 17:38:00.239747   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-912766 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.811108908s)
--- PASS: TestJSONOutput/start/Command (80.81s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.65s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-912766 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-912766 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-912766 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-912766 --output=json --user=testUser: (7.330340086s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-714591 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-714591 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.251133ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1e9c1980-b76f-4cdd-b8ea-13c268390ebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-714591] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"00f83421-340c-4ef9-ad60-53a799775ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"018e21fd-6dd2-427e-ad3d-8dd56f901398","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2a3e0e6-71b4-4830-8eee-30c42935f1ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig"}}
	{"specversion":"1.0","id":"fdddbf97-b7a7-4e14-8cf3-d2929649e227","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube"}}
	{"specversion":"1.0","id":"b346179c-c974-46e8-b42b-95f779c6d36e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0684eb2c-cf7b-4861-9d0e-cfbdba99e30c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d5e1b07a-8586-46af-ab6c-0f7567685f17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-714591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-714591
--- PASS: TestErrorJSONOutput (0.19s)
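The captured stdout above is minikube's CloudEvents-style JSON mode: one JSON object per line, each carrying a type (step, info, error) and a data payload. A small sketch of consuming that stream from a script, assuming jq is available on the host and reusing the throwaway profile name purely for illustration:

    out/minikube-linux-amd64 start -p json-output-error-714591 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'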

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (84.26s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-104664 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-104664 --driver=kvm2  --container-runtime=crio: (39.683145046s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-107611 --driver=kvm2  --container-runtime=crio
E0828 17:39:23.525652   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-107611 --driver=kvm2  --container-runtime=crio: (41.823214829s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-104664
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-107611
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-107611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-107611
helpers_test.go:175: Cleaning up "first-104664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-104664
--- PASS: TestMinikubeProfile (84.26s)

TestMountStart/serial/StartWithMountFirst (31.32s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-234199 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-234199 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (30.323812615s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.32s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-234199 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-234199 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.12s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-247451 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-247451 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.117700311s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.12s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-234199 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.64s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-247451
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-247451: (1.270747615s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-247451
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-247451: (22.128078778s)
--- PASS: TestMountStart/serial/RestartStopped (23.13s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-247451 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (108.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-168922 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0828 17:43:00.240401   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-168922 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m48.294766767s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.70s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-168922 -- rollout status deployment/busybox: (4.325044537s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-92kwg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-w6glt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-92kwg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-w6glt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-92kwg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-w6glt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.70s)
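
The block above deploys busybox pods across both nodes and checks in-cluster DNS from each of them. A minimal sketch of that DNS check, assuming kubectl can reach the multinode-168922 context and that the default namespace only contains the busybox test pods (as in this run):

// deploy_dns_sketch.go - resolve kubernetes.default.svc.cluster.local from every pod.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List pod names the same way the test does, via a jsonpath expression.
	names, err := exec.Command("kubectl", "--context", "multinode-168922",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("get pods: %v", err)
	}
	for _, pod := range strings.Fields(string(names)) {
		out, err := exec.Command("kubectl", "--context", "multinode-168922",
			"exec", pod, "--", "nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		if err != nil {
			log.Fatalf("DNS lookup from %s failed: %v\n%s", pod, err, out)
		}
		fmt.Printf("%s resolved kubernetes.default.svc.cluster.local\n", pod)
	}
}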

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-92kwg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-92kwg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-w6glt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-168922 -- exec busybox-7dff88458-w6glt -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
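
The awk/cut pipeline above pulls the resolved address of host.minikube.internal out of busybox's nslookup output and then pings it from inside the pod. A minimal Go sketch of the same idea, using a pod name taken from this run and assuming busybox prints the resolved address as the last "Address" line (an assumption about its output format):

// ping_host_sketch.go - check that a pod can reach the host via host.minikube.internal.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-92kwg" // pod name taken from this test run
	out, err := exec.Command("kubectl", "--context", "multinode-168922",
		"exec", pod, "--", "nslookup", "host.minikube.internal").Output()
	if err != nil {
		log.Fatalf("nslookup failed: %v", err)
	}
	var hostIP string
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "Address") {
			fields := strings.Fields(line)
			hostIP = fields[len(fields)-1] // last Address line wins (the resolved host)
		}
	}
	if hostIP == "" {
		log.Fatal("could not parse host.minikube.internal address")
	}
	if out, err := exec.Command("kubectl", "--context", "multinode-168922",
		"exec", pod, "--", "ping", "-c", "1", hostIP).CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, out)
	}
	fmt.Println("pod can reach the host at", hostIP) // 192.168.39.1 in this run
}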

                                                
                                    
x
+
TestMultiNode/serial/AddNode (51.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-168922 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-168922 -v 3 --alsologtostderr: (51.417899247s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.97s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-168922 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp testdata/cp-test.txt multinode-168922:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922:/home/docker/cp-test.txt multinode-168922-m02:/home/docker/cp-test_multinode-168922_multinode-168922-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test_multinode-168922_multinode-168922-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922:/home/docker/cp-test.txt multinode-168922-m03:/home/docker/cp-test_multinode-168922_multinode-168922-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test_multinode-168922_multinode-168922-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp testdata/cp-test.txt multinode-168922-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt multinode-168922:/home/docker/cp-test_multinode-168922-m02_multinode-168922.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test_multinode-168922-m02_multinode-168922.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m02:/home/docker/cp-test.txt multinode-168922-m03:/home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test_multinode-168922-m02_multinode-168922-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp testdata/cp-test.txt multinode-168922-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1181089229/001/cp-test_multinode-168922-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt multinode-168922:/home/docker/cp-test_multinode-168922-m03_multinode-168922.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922 "sudo cat /home/docker/cp-test_multinode-168922-m03_multinode-168922.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 cp multinode-168922-m03:/home/docker/cp-test.txt multinode-168922-m02:/home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 ssh -n multinode-168922-m02 "sudo cat /home/docker/cp-test_multinode-168922-m03_multinode-168922-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.92s)
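
Each cp/ssh pair above is a round-trip: copy a file to a node with `minikube cp`, then read it back over `minikube ssh -n <node>`. A minimal sketch of one such round-trip, reusing the profile and node names from this run and assuming minikube is on PATH:

// copyfile_sketch.go - copy a local file into node m02 and read it back.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	src, err := os.CreateTemp("", "cp-test-*.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer os.Remove(src.Name())
	src.WriteString("hello from the host\n")
	src.Close()

	run("-p", "multinode-168922", "cp", src.Name(),
		"multinode-168922-m02:/home/docker/cp-test.txt")
	out := run("-p", "multinode-168922", "ssh", "-n", "multinode-168922-m02",
		"sudo cat /home/docker/cp-test.txt")
	fmt.Print(out)
}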

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-168922 node stop m03: (1.339668344s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-168922 status: exit status 7 (417.812814ms)

                                                
                                                
-- stdout --
	multinode-168922
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-168922-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-168922-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr: exit status 7 (412.695947ms)

                                                
                                                
-- stdout --
	multinode-168922
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-168922-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-168922-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 17:44:16.909962   46567 out.go:345] Setting OutFile to fd 1 ...
	I0828 17:44:16.910231   46567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:44:16.910241   46567 out.go:358] Setting ErrFile to fd 2...
	I0828 17:44:16.910247   46567 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 17:44:16.910452   46567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 17:44:16.910639   46567 out.go:352] Setting JSON to false
	I0828 17:44:16.910669   46567 mustload.go:65] Loading cluster: multinode-168922
	I0828 17:44:16.910771   46567 notify.go:220] Checking for updates...
	I0828 17:44:16.911081   46567 config.go:182] Loaded profile config "multinode-168922": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 17:44:16.911097   46567 status.go:255] checking status of multinode-168922 ...
	I0828 17:44:16.911462   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:16.911621   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:16.930272   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I0828 17:44:16.930736   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:16.931302   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:16.931322   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:16.931777   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:16.932014   46567 main.go:141] libmachine: (multinode-168922) Calling .GetState
	I0828 17:44:16.933507   46567 status.go:330] multinode-168922 host status = "Running" (err=<nil>)
	I0828 17:44:16.933530   46567 host.go:66] Checking if "multinode-168922" exists ...
	I0828 17:44:16.933944   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:16.934022   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:16.949350   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0828 17:44:16.949849   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:16.950372   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:16.950395   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:16.950716   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:16.950878   46567 main.go:141] libmachine: (multinode-168922) Calling .GetIP
	I0828 17:44:16.953417   46567 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:44:16.953771   46567 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:44:16.953798   46567 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:44:16.953919   46567 host.go:66] Checking if "multinode-168922" exists ...
	I0828 17:44:16.954226   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:16.954265   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:16.969337   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40251
	I0828 17:44:16.969764   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:16.970211   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:16.970237   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:16.970521   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:16.970687   46567 main.go:141] libmachine: (multinode-168922) Calling .DriverName
	I0828 17:44:16.970870   46567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:44:16.970900   46567 main.go:141] libmachine: (multinode-168922) Calling .GetSSHHostname
	I0828 17:44:16.973328   46567 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:44:16.973648   46567 main.go:141] libmachine: (multinode-168922) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:02:48:dd", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:41:34 +0000 UTC Type:0 Mac:52:54:00:02:48:dd Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-168922 Clientid:01:52:54:00:02:48:dd}
	I0828 17:44:16.973680   46567 main.go:141] libmachine: (multinode-168922) DBG | domain multinode-168922 has defined IP address 192.168.39.123 and MAC address 52:54:00:02:48:dd in network mk-multinode-168922
	I0828 17:44:16.973826   46567 main.go:141] libmachine: (multinode-168922) Calling .GetSSHPort
	I0828 17:44:16.973969   46567 main.go:141] libmachine: (multinode-168922) Calling .GetSSHKeyPath
	I0828 17:44:16.974111   46567 main.go:141] libmachine: (multinode-168922) Calling .GetSSHUsername
	I0828 17:44:16.974255   46567 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922/id_rsa Username:docker}
	I0828 17:44:17.052970   46567 ssh_runner.go:195] Run: systemctl --version
	I0828 17:44:17.058447   46567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:44:17.075427   46567 kubeconfig.go:125] found "multinode-168922" server: "https://192.168.39.123:8443"
	I0828 17:44:17.075463   46567 api_server.go:166] Checking apiserver status ...
	I0828 17:44:17.075513   46567 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 17:44:17.091075   46567 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup
	W0828 17:44:17.100886   46567 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1048/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 17:44:17.100960   46567 ssh_runner.go:195] Run: ls
	I0828 17:44:17.105489   46567 api_server.go:253] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0828 17:44:17.109775   46567 api_server.go:279] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0828 17:44:17.109795   46567 status.go:422] multinode-168922 apiserver status = Running (err=<nil>)
	I0828 17:44:17.109809   46567 status.go:257] multinode-168922 status: &{Name:multinode-168922 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:44:17.109834   46567 status.go:255] checking status of multinode-168922-m02 ...
	I0828 17:44:17.110161   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:17.110195   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:17.126098   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44769
	I0828 17:44:17.126449   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:17.126894   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:17.126916   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:17.127207   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:17.127388   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetState
	I0828 17:44:17.128891   46567 status.go:330] multinode-168922-m02 host status = "Running" (err=<nil>)
	I0828 17:44:17.128910   46567 host.go:66] Checking if "multinode-168922-m02" exists ...
	I0828 17:44:17.129180   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:17.129219   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:17.144296   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0828 17:44:17.144703   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:17.145104   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:17.145127   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:17.145430   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:17.145635   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetIP
	I0828 17:44:17.148171   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | domain multinode-168922-m02 has defined MAC address 52:54:00:0c:06:ea in network mk-multinode-168922
	I0828 17:44:17.148515   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:06:ea", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:42:36 +0000 UTC Type:0 Mac:52:54:00:0c:06:ea Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-168922-m02 Clientid:01:52:54:00:0c:06:ea}
	I0828 17:44:17.148539   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | domain multinode-168922-m02 has defined IP address 192.168.39.88 and MAC address 52:54:00:0c:06:ea in network mk-multinode-168922
	I0828 17:44:17.148659   46567 host.go:66] Checking if "multinode-168922-m02" exists ...
	I0828 17:44:17.148951   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:17.148983   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:17.163884   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0828 17:44:17.164244   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:17.164682   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:17.164702   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:17.164966   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:17.165211   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .DriverName
	I0828 17:44:17.165384   46567 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 17:44:17.165403   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetSSHHostname
	I0828 17:44:17.167953   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | domain multinode-168922-m02 has defined MAC address 52:54:00:0c:06:ea in network mk-multinode-168922
	I0828 17:44:17.168358   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:06:ea", ip: ""} in network mk-multinode-168922: {Iface:virbr1 ExpiryTime:2024-08-28 18:42:36 +0000 UTC Type:0 Mac:52:54:00:0c:06:ea Iaid: IPaddr:192.168.39.88 Prefix:24 Hostname:multinode-168922-m02 Clientid:01:52:54:00:0c:06:ea}
	I0828 17:44:17.168389   46567 main.go:141] libmachine: (multinode-168922-m02) DBG | domain multinode-168922-m02 has defined IP address 192.168.39.88 and MAC address 52:54:00:0c:06:ea in network mk-multinode-168922
	I0828 17:44:17.168470   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetSSHPort
	I0828 17:44:17.168644   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetSSHKeyPath
	I0828 17:44:17.168809   46567 main.go:141] libmachine: (multinode-168922-m02) Calling .GetSSHUsername
	I0828 17:44:17.168953   46567 sshutil.go:53] new ssh client: &{IP:192.168.39.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19529-10317/.minikube/machines/multinode-168922-m02/id_rsa Username:docker}
	I0828 17:44:17.248943   46567 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 17:44:17.262289   46567 status.go:257] multinode-168922-m02 status: &{Name:multinode-168922-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0828 17:44:17.262323   46567 status.go:255] checking status of multinode-168922-m03 ...
	I0828 17:44:17.262635   46567 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0828 17:44:17.262677   46567 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0828 17:44:17.278469   46567 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0828 17:44:17.278951   46567 main.go:141] libmachine: () Calling .GetVersion
	I0828 17:44:17.279467   46567 main.go:141] libmachine: Using API Version  1
	I0828 17:44:17.279505   46567 main.go:141] libmachine: () Calling .SetConfigRaw
	I0828 17:44:17.279811   46567 main.go:141] libmachine: () Calling .GetMachineName
	I0828 17:44:17.279985   46567 main.go:141] libmachine: (multinode-168922-m03) Calling .GetState
	I0828 17:44:17.281554   46567 status.go:330] multinode-168922-m03 host status = "Stopped" (err=<nil>)
	I0828 17:44:17.281570   46567 status.go:343] host is not running, skipping remaining checks
	I0828 17:44:17.281578   46567 status.go:257] multinode-168922-m03 status: &{Name:multinode-168922-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
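
Note the pattern above: after stopping m03, `minikube status` exits non-zero (status 7 in this run) while still printing the per-node breakdown, so the test inspects the exit code instead of treating it as a hard failure. A minimal Go sketch of that handling, assuming minikube is on PATH:

// status_exitcode_sketch.go - read minikube status and report its exit code.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-168922", "status", "--alsologtostderr")
	out, err := cmd.Output() // stdout is still populated on a non-zero exit
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		// In this run, exit status 7 indicated at least one stopped host.
		fmt.Println("status exit code:", exitErr.ExitCode())
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}
}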

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 node start m03 -v=7 --alsologtostderr
E0828 17:44:23.524035   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-168922 node start m03 -v=7 --alsologtostderr: (38.734033411s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.33s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-168922 node delete m03: (1.658835282s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.16s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (174.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-168922 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0828 17:53:00.239724   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 17:54:23.524718   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-168922 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m53.751021431s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-168922 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (174.26s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (42.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-168922
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-168922-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-168922-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (57.76088ms)

                                                
                                                
-- stdout --
	* [multinode-168922-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-168922-m02' is duplicated with machine name 'multinode-168922-m02' in profile 'multinode-168922'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-168922-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-168922-m03 --driver=kvm2  --container-runtime=crio: (41.481778354s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-168922
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-168922: exit status 80 (201.09492ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-168922 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-168922-m03 already exists in multinode-168922-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-168922-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.75s)
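
The conflict check above expects two specific refusals: starting a profile whose name collides with an existing machine name exits with status 14 (MK_USAGE), and adding a node that duplicates an existing profile exits with status 80 (GUEST_NODE_ADD). A minimal sketch of asserting the first case, reusing the colliding name from this run:

// name_conflict_sketch.go - expect exit status 14 when a profile name is duplicated.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "multinode-168922-m02",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Printf("rejected as expected (MK_USAGE):\n%s", out)
		return
	}
	log.Fatalf("expected exit status 14, got err=%v\n%s", err, out)
}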

                                                
                                    
x
+
TestScheduledStopUnix (111s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-516717 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-516717 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.469963754s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516717 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-516717 -n scheduled-stop-516717
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516717 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516717 -n scheduled-stop-516717
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516717
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-516717 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-516717
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-516717: exit status 7 (63.764709ms)

                                                
                                                
-- stdout --
	scheduled-stop-516717
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516717 -n scheduled-stop-516717
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-516717 -n scheduled-stop-516717: exit status 7 (63.693854ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-516717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-516717
--- PASS: TestScheduledStopUnix (111.00s)
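
The flow above schedules a stop, confirms the countdown via `status --format={{.TimeToStop}}`, cancels it, and later lets a short schedule actually stop the machine (at which point status exits 7 with everything reported Stopped). A minimal sketch of the schedule/inspect/cancel portion, with a hypothetical profile name and assuming minikube is on PATH:

// scheduled_stop_sketch.go - schedule a stop, read the countdown, then cancel it.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func minikube(args ...string) []byte {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func main() {
	const profile = "scheduled-stop-demo" // hypothetical profile name
	minikube("stop", "-p", profile, "--schedule", "5m")
	ttl := minikube("status", "-p", profile, "--format={{.TimeToStop}}")
	fmt.Printf("time to scheduled stop: %s\n", ttl)
	minikube("stop", "-p", profile, "--cancel-scheduled")
	fmt.Println("scheduled stop cancelled")
}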

                                                
                                    
x
+
TestRunningBinaryUpgrade (180.51s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1999404408 start -p running-upgrade-783149 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1999404408 start -p running-upgrade-783149 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m18.897597735s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-783149 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-783149 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m38.335792981s)
helpers_test.go:175: Cleaning up "running-upgrade-783149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-783149
--- PASS: TestRunningBinaryUpgrade (180.51s)
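
The upgrade above boils down to creating the cluster with an older release binary and then re-running `start` on the same profile with the binary under test. A minimal sketch of that two-step flow, under the assumption that both binaries are available at the paths shown (the paths and profile name here are hypothetical, not the ones from this run):

// binary_upgrade_sketch.go - start with an old minikube, then upgrade the same profile.
package main

import (
	"log"
	"os/exec"
)

func start(binary, profile string) {
	cmd := exec.Command(binary, "start", "-p", profile,
		"--memory=2200", "--driver=kvm2", "--container-runtime=crio")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%s start failed: %v\n%s", binary, err, out)
	}
}

func main() {
	const profile = "running-upgrade-demo"  // hypothetical profile name
	start("/tmp/minikube-v1.26.0", profile) // older release binary (path is an assumption)
	start("minikube", profile)              // current binary picks up and upgrades the same profile
}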

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (71.914078ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-682143] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-682143 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-682143 --driver=kvm2  --container-runtime=crio: (1m36.125339973s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-682143 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-647068 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-647068 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (98.131799ms)

                                                
                                                
-- stdout --
	* [false-647068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19529
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0828 18:02:01.282676   54316 out.go:345] Setting OutFile to fd 1 ...
	I0828 18:02:01.282785   54316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:02:01.282797   54316 out.go:358] Setting ErrFile to fd 2...
	I0828 18:02:01.282802   54316 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 18:02:01.283337   54316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19529-10317/.minikube/bin
	I0828 18:02:01.284525   54316 out.go:352] Setting JSON to false
	I0828 18:02:01.285716   54316 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6267,"bootTime":1724861854,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0828 18:02:01.285804   54316 start.go:139] virtualization: kvm guest
	I0828 18:02:01.287686   54316 out.go:177] * [false-647068] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0828 18:02:01.288821   54316 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 18:02:01.288860   54316 notify.go:220] Checking for updates...
	I0828 18:02:01.291226   54316 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 18:02:01.292373   54316 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19529-10317/kubeconfig
	I0828 18:02:01.293493   54316 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19529-10317/.minikube
	I0828 18:02:01.294749   54316 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0828 18:02:01.295925   54316 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 18:02:01.297531   54316 config.go:182] Loaded profile config "NoKubernetes-682143": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:02:01.297654   54316 config.go:182] Loaded profile config "force-systemd-env-755013": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:02:01.297738   54316 config.go:182] Loaded profile config "offline-crio-652855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0828 18:02:01.297806   54316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 18:02:01.334221   54316 out.go:177] * Using the kvm2 driver based on user configuration
	I0828 18:02:01.335333   54316 start.go:297] selected driver: kvm2
	I0828 18:02:01.335348   54316 start.go:901] validating driver "kvm2" against <nil>
	I0828 18:02:01.335358   54316 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 18:02:01.337191   54316 out.go:201] 
	W0828 18:02:01.338336   54316 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0828 18:02:01.339402   54316 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-647068 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-647068

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-647068" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: ip a s:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: ip r s:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: iptables-save:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: iptables table nat:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> k8s: describe kube-proxy daemon set:
error: context "false-647068" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-647068" does not exist

>>> k8s: kube-proxy logs:
error: context "false-647068" does not exist

>>> host: kubelet daemon status:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: kubelet daemon config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> k8s: kubelet logs:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-647068

>>> host: docker daemon status:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: docker daemon config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /etc/docker/daemon.json:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: docker system info:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: cri-docker daemon status:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: cri-docker daemon config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: cri-dockerd version:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: containerd daemon status:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: containerd daemon config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /etc/containerd/config.toml:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: containerd config dump:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: crio daemon status:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: crio daemon config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: /etc/crio:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

>>> host: crio config:
* Profile "false-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-647068"

----------------------- debugLogs end: false-647068 [took: 2.59509443s] --------------------------------
helpers_test.go:175: Cleaning up "false-647068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-647068
--- PASS: TestNetworkPlugins/group/false (2.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (45.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --driver=kvm2  --container-runtime=crio: (44.39565465s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-682143 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-682143 status -o json: exit status 2 (255.434177ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-682143","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-682143
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-682143: (1.012282427s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.66s)
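
Note on the exit status 2 above: with Kubernetes disabled, `minikube status -o json` still prints the profile's state on stdout but signals the stopped kubelet/apiserver through its exit code, so a caller has to read the JSON even when the command "fails". A minimal sketch, assuming the same binary path and profile name as the log (the struct mirrors only the fields shown in the output; treating any non-empty stdout as usable is an assumption, not minikube's documented contract):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileStatus mirrors the fields visible in the status output above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	// The command exits non-zero when any component is stopped, so keep
	// whatever stdout was produced instead of bailing out on err alone.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "NoKubernetes-682143",
		"status", "-o", "json").Output()
	if len(out) == 0 && err != nil {
		panic(err)
	}
	var st profileStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		panic(jerr)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}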

                                                
                                    
x
+
TestNoKubernetes/serial/Start (45.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0828 18:04:23.525214   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-682143 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.173206945s)
--- PASS: TestNoKubernetes/serial/Start (45.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-682143 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-682143 "sudo systemctl is-active --quiet service kubelet": exit status 1 (193.945046ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
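
The non-zero exit is the pass condition here: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, the `ssh: Process exited with status 3` in stderr is systemctl reporting the kubelet unit as not active, and `minikube ssh` surfaces that as its own non-zero exit. A minimal sketch of the same check, assuming the binary path and profile name from the log (the helper itself is illustrative, not the test's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the remote systemd query; a non-zero exit means the kubelet
	// unit is not active, which is what a --no-kubernetes profile expects.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-682143",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("kubelet is not active (exit code %d), as expected\n", exitErr.ExitCode())
			return
		}
		fmt.Println("could not run the check:", err)
		return
	}
	fmt.Println("kubelet is active - unexpected for a --no-kubernetes profile")
}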

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-682143
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-682143: (1.282010426s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (59.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-682143 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-682143 --driver=kvm2  --container-runtime=crio: (59.767476445s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (59.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-682143 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-682143 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.037459ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (101.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3573004776 start -p stopped-upgrade-826492 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3573004776 start -p stopped-upgrade-826492 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (51.690008934s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3573004776 -p stopped-upgrade-826492 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3573004776 -p stopped-upgrade-826492 stop: (1.418989229s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-826492 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-826492 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.163881469s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.27s)

                                                
                                    
x
+
TestPause/serial/Start (51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-454941 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-454941 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (50.99572906s)
--- PASS: TestPause/serial/Start (51.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (67.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m7.16185521s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-826492
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0828 18:08:00.239776   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.07865418s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.08s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (57.31s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-454941 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-454941 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.279912056s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (57.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2qbz" [0e3f6463-833d-4e22-9b51-6a946b5b080c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c2qbz" [0e3f6463-833d-4e22-9b51-6a946b5b080c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003815659s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)
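
The wait above watches pods labelled app=netcat until they leave Pending and report Running. A rough sketch of that kind of wait loop, assuming the context name from the log and using kubectl with a jsonpath query; the 15-minute deadline matches the test's wait, while the 5-second poll interval is an illustrative choice:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll until every pod labelled app=netcat reports phase Running.
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "auto-647068",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		allRunning := err == nil && len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				allRunning = false
			}
		}
		if allRunning {
			fmt.Println("all app=netcat pods are Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat pods")
}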

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (16.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-647068 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-647068 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.176397791s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.12s)
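
The first in-pod lookup timed out ("no servers could be reached"), the second attempt succeeded, so the case still passes; the check is run again rather than failing on the first timeout. A minimal sketch of that retry pattern, assuming the context name from the log; the retry count and interval below are illustrative, not the test's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry the in-pod DNS lookup a few times; a single timeout is often
	// just cluster DNS still warming up after the netcat pod started.
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", "--context", "auto-647068",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default").CombinedOutput()
		if err == nil {
			fmt.Printf("DNS resolved on attempt %d:\n%s", attempt, out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("DNS never resolved")
}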

                                                
                                    
x
+
TestPause/serial/Pause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-454941 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-454941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-454941 --output=json --layout=cluster: exit status 2 (227.17837ms)

                                                
                                                
-- stdout --
	{"Name":"pause-454941","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-454941","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
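
The `--layout=cluster` output above uses HTTP-style status codes (418 Paused, 405 Stopped, 200 OK) for the cluster, its components, and each node, and the command exits 2 for a paused cluster even though the JSON is complete. A minimal sketch of decoding it, assuming the binary path and profile name from the log (the struct covers only the fields shown; the full schema may contain more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// clusterStatus mirrors only the fields of the --layout=cluster output shown above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		StatusCode int
		StatusName string
	}
}

func main() {
	// Exit status 2 with JSON on stdout is expected for a paused cluster,
	// so only give up when there is no output at all.
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-454941", "--output=json", "--layout=cluster").Output()
	if len(out) == 0 && err != nil {
		panic(err)
	}
	var cs clusterStatus
	if jerr := json.Unmarshal(out, &cs); jerr != nil {
		panic(jerr)
	}
	fmt.Printf("cluster %s: %d %s, %d node(s)\n", cs.Name, cs.StatusCode, cs.StatusName, len(cs.Nodes))
}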

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-454941 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.86s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-454941 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-454941 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-454941 --alsologtostderr -v=5: (1.009171956s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
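
The Localhost and HairPin cases above are the same `nc -z` probe run against two targets: "localhost" (the pod talking to itself directly) and "netcat" (the pod reaching itself back through its own service, which only works when the CNI/kube-proxy setup supports hairpin traffic). A minimal sketch of both probes, assuming the context name from the log; the helper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// runNC executes an `nc -z` connectivity probe inside the netcat deployment.
func runNC(target string) error {
	probe := fmt.Sprintf("nc -w 5 -i 5 -z %s 8080", target)
	return exec.Command("kubectl", "--context", "auto-647068",
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", probe).Run()
}

func main() {
	for _, target := range []string{"localhost", "netcat"} {
		if err := runNC(target); err != nil {
			fmt.Printf("%s probe failed: %v\n", target, err)
			continue
		}
		fmt.Printf("%s probe succeeded\n", target)
	}
}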

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (83.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.80591445s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.81s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rp4rl" [7cd853f1-44c7-429f-b243-df540a25e62a] Running
E0828 18:09:23.523736   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/addons-990097/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004649688s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-926v9" [d4fd1af5-3ba4-4817-ac7e-0f145afe5b04] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-926v9" [d4fd1af5-3ba4-4817-ac7e-0f145afe5b04] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00431778s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (88.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m28.570726917s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (88.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (119.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m59.612632036s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (119.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-t2w4b" [d5544d24-1b79-48d4-93ce-7f255a8fa946] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006243145s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5p2wf" [8ada85fd-9c46-4f97-81f8-634ebbdbbb41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5p2wf" [8ada85fd-9c46-4f97-81f8-634ebbdbbb41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.007874516s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9rm9n" [cbb0ade8-ea61-418f-96cd-10f898ba366e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9rm9n" [cbb0ade8-ea61-418f-96cd-10f898ba366e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.004817617s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.478830373s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (95.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-647068 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m35.512136949s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wrhrc" [4e96de23-9a76-4200-bf34-40359b9c8364] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wrhrc" [4e96de23-9a76-4200-bf34-40359b9c8364] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006511377s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (88.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-072854 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-072854 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m28.424317588s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (88.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-cfpxv" [ce80d168-1c3e-4fe4-a091-ba224040d412] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.007023028s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-829hb" [8b3fbce6-0a5b-4fb3-b6b8-a0396792bccf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-829hb" [8b3fbce6-0a5b-4fb3-b6b8-a0396792bccf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004654635s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-647068 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-647068 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9jpbw" [8528c3c4-7c0b-4e7d-b762-28eaf6120d80] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9jpbw" [8528c3c4-7c0b-4e7d-b762-28eaf6120d80] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005380524s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (80.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-014980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-014980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m20.931330557s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-647068 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-647068 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-640552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0828 18:13:51.163749   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.170161   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.182303   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.203657   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.245115   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.326940   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:51.488533   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-640552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (54.870207891s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.87s)
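The repeated cert_rotation.go:171 "Unhandled Error ... client.crt: no such file or directory" lines interleaved with this start appear to come from client-go's certificate-rotation watcher still holding references to profiles (auto-647068 and, later, kindnet/flannel/bridge-647068) that earlier network-plugin tests already deleted; they do not fail the test under way. A hedged shell sketch for checking which profile client certificates still exist on the test host (paths copied from the log; the loop itself is only illustrative):

for d in /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/*/; do
  [ -f "${d}client.crt" ] && echo "present: ${d}client.crt" || echo "missing: ${d}client.crt"
done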

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-072854 create -f testdata/busybox.yaml
E0828 18:13:51.810028   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e90c8374-18c5-4c02-8189-c6ebe492f3a8] Pending
E0828 18:13:52.452102   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [e90c8374-18c5-4c02-8189-c6ebe492f3a8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0828 18:13:53.733723   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:13:56.296022   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [e90c8374-18c5-4c02-8189-c6ebe492f3a8] Running
E0828 18:14:01.417768   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.003583028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-072854 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.29s)
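The DeployApp step creates its workload from testdata/busybox.yaml, waits for a pod matching integration-test=busybox, and then runs ulimit -n inside it. The manifest itself is not reproduced in this report; a minimal sketch of an equivalent pod (only the pod name and label come from the log above, the image and sleep command are assumptions) would be:

kubectl --context no-preload-072854 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: busybox:stable   # assumption: any busybox image providing /bin/sh
    command: ["sleep", "3600"]
EOF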

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-072854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-072854 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051489005s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-072854 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-014980 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abd95b61-ecdf-4f7a-a3e3-6c8c1507ace3] Pending
E0828 18:14:31.712643   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [abd95b61-ecdf-4f7a-a3e3-6c8c1507ace3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [abd95b61-ecdf-4f7a-a3e3-6c8c1507ace3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003773656s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-014980 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f826550a-fcfa-4f39-9c73-44834e6e4721] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0828 18:14:32.141190   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/auto-647068/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [f826550a-fcfa-4f39-9c73-44834e6e4721] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004886093s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-014980 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-014980 describe deploy/metrics-server -n kube-system
E0828 18:14:41.954906   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/kindnet-647068/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-640552 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-640552 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)
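Each EnableAddonWhileActive step above enables metrics-server with its image and registry overridden (registry.k8s.io/echoserver:1.4 served from fake.domain) and then describes the deployment to confirm the override. An illustrative way to pull out just the resulting container image, not something the test itself runs, is:

kubectl --context default-k8s-diff-port-640552 -n kube-system \
  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'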

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (642.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-072854 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-072854 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m41.806940029s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-072854 -n no-preload-072854
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (642.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (578.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-014980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-014980 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m37.920814639s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-014980 -n embed-certs-014980
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (578.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (555.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-640552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0828 18:17:15.891991   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:25.611917   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/custom-flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.434691   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.441904   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.453245   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.474610   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.516018   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.597434   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:34.758960   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:35.080962   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:35.723138   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:36.374190   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:37.005204   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:39.567604   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:44.689096   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:17:54.931072   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:00.240336   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:09.993924   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.000309   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.011710   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.033104   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.074512   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.155996   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.317810   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:10.639516   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:11.280975   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:12.563253   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:15.125305   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:15.413252   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:17.335940   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:20.246875   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:28.724960   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/calico-647068/client.crt: no such file or directory" logger="UnhandledError"
E0828 18:18:30.488995   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/bridge-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-640552 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m14.910137709s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-640552 -n default-k8s-diff-port-640552
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (555.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-131737 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-131737 --alsologtostderr -v=3: (4.282705825s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131737 -n old-k8s-version-131737: exit status 7 (59.256452ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-131737 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
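The EnableAddonAfterStop steps all follow the same pattern: minikube status exits with code 7 while the host is stopped, the test accepts that ("may be ok"), and the dashboard addon is then enabled against the stopped profile. Rendered as a plain shell sketch (command lines taken from the log; the exit-code handling is an illustration, not the test's actual Go logic):

out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-131737 -n old-k8s-version-131737
if [ $? -eq 7 ]; then
  echo "host reports Stopped (exit 7), continuing"
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-131737 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4
fi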

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (43.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-835349 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0828 18:41:55.397286   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/enable-default-cni-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-835349 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (43.166243113s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.17s)
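The newest-cni profile is started with --network-plugin=cni and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so no workloads are expected to schedule until a CNI is installed (hence the warnings in the steps below). One illustrative way to confirm the custom pod CIDR was handed to the node (an assumption about where to look, not part of the test):

kubectl --context newest-cni-835349 get nodes -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'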

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-835349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-835349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025402197s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-835349 --alsologtostderr -v=3
E0828 18:42:34.435530   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/flannel-647068/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-835349 --alsologtostderr -v=3: (10.53061556s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-835349 -n newest-cni-835349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-835349 -n newest-cni-835349: exit status 7 (65.802787ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-835349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-835349 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0828 18:43:00.240042   17528 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19529-10317/.minikube/profiles/functional-682131/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-835349 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (34.988161523s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-835349 -n newest-cni-835349
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-835349 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-835349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-835349 -n newest-cni-835349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-835349 -n newest-cni-835349: exit status 2 (229.856986ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-835349 -n newest-cni-835349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-835349 -n newest-cni-835349: exit status 2 (227.873382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-835349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-835349 -n newest-cni-835349
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-835349 -n newest-cni-835349
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.21s)
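The Pause step exercises the full pause → status → unpause → status cycle: while paused, status exits 2 and reports the API server as Paused and the kubelet as Stopped, which the test again treats as acceptable. The same round trip as a plain shell sketch (commands copied from the log; the echo lines are only illustrative):

out/minikube-linux-amd64 pause -p newest-cni-835349 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-835349 -n newest-cni-835349 || echo "exit $? while paused (expected 2)"
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-835349 -n newest-cni-835349 || echo "exit $? while paused (expected 2)"
out/minikube-linux-amd64 unpause -p newest-cni-835349 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-835349 -n newest-cni-835349
out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-835349 -n newest-cni-835349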

                                                
                                    

Test skip (37/318)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
254 TestNetworkPlugins/group/kubenet 2.78
263 TestNetworkPlugins/group/cilium 3.06
278 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-647068 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-647068" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-647068

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-647068"

                                                
                                                
----------------------- debugLogs end: kubenet-647068 [took: 2.645698644s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-647068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-647068
--- SKIP: TestNetworkPlugins/group/kubenet (2.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-647068 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-647068" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-647068

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-647068" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-647068"

                                                
                                                
----------------------- debugLogs end: cilium-647068 [took: 2.927550245s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-647068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-647068
--- SKIP: TestNetworkPlugins/group/cilium (3.06s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-341028" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-341028
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    